2017: Big software is changing everything in the data centre

By Anand Krishnan, EVP and GM of Canonical’s Cloud Division.

Thursday, 22nd December 2016
2016 was a phase-change year for many technology companies. Enterprises and service providers were forced to look at new business models and technologies as customer demands put pressure on their organizations. Knowledge and speed are the primary factors behind the technological disruptors that have affected nearly every industry, and 2017 will be a continuation of this disruption.
 
Below are several predictions we believe will play out in 2017 and beyond:
 
1. In 2017 the majority of new IT projects will deploy and operate ‘Big’ Software
Businesses are looking for capabilities that require a new class of software. Where applications were once composed of a couple of components on a few machines, modern workloads like Cloud, Big Data and IoT are made up of many software components and integration points spread across thousands of physical and virtual machines. Organizations have begun to realize that Big Software is both an opportunity and a challenge. By its nature, Big Software forces organizations to break away from traditional legacy software and deployment models. New tools, methods, and technologies are emerging to manage the effects Big Software will have within the data centre, and many organizations will begin adopting them in 2017.
 
2. Companies will have unprecedented choices in how they deploy and consume software and services.
Software licenses are on the decline, yet operating costs for IT departments have increased. To counter this, many companies are educating themselves on the solutions and service providers that are the right fit for their business. IT operations have a range of substrates to choose from: many organizations are turning to OpenStack for private cloud, while hybrid cloud offers further flexibility. Forward-thinking IT organizations will take an all-of-the-above approach that includes containers, bare metal, and public cloud as target substrates.
 
3. LXD containers will see more mainstream adoption.
Docker put containers on the map, but it is only part of a larger change in how the industry thinks about software deployment and integration. Organizations are hitting fundamental limits in their ability to manage escalating complexity, especially within traditional virtual machine (VM) environments. LXD is the next generation of containers: system containers that improve IT operations and have proven successful in enabling cost-effective scale-out infrastructure. Companies confronted with the rigidity and rising costs of virtual machines are turning to LXD containers to make their cloud deployments more efficient, and LXD gives CIOs better control over their cloud infrastructure. Another major driver of container adoption is the emergence of Kubernetes for orchestrating and managing Linux containers, alongside Docker for deploying process containers. Together they enable microservices and provide a mechanism to bring cloud services online faster, more cost-effectively, and with an immediate ROI.
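To make this concrete, LXD manages full system containers that launch from an image in seconds and are driven through the stock lxc command line, whatever the underlying substrate. The short sketch below is a hypothetical illustration that drives that command line from Python; the container name (web1) and the image alias (ubuntu:16.04) are assumptions for illustration, not details from the article.

# Hypothetical sketch: launch an LXD system container and read back its state
# by shelling out to the stock `lxc` client. Names and image alias are illustrative.
import json
import subprocess

def launch_container(name: str, image: str = "ubuntu:16.04") -> None:
    # `lxc launch <image> <name>` creates and starts a system container.
    subprocess.run(["lxc", "launch", image, name], check=True)

def container_info(name: str) -> dict:
    # `lxc list <name> --format json` returns machine-readable container state.
    out = subprocess.run(
        ["lxc", "list", name, "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)[0]

if __name__ == "__main__":
    launch_container("web1")
    info = container_info("web1")
    print(info["name"], info["status"])  # e.g. "web1 Running"

Because the container is a lightweight process boundary rather than a full virtual machine, the same workflow scales to dense fleets of containers on a single host, which is where the cost advantage over VMs comes from.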
 
4. Application Modeling will emerge as the most viable way to manage cloud deployments.
Model-driven operations enable reusable, open source deployment methodologies across hybrid cloud and physical or private cloud infrastructures (e.g. OpenStack). Applications and operations are encoded in these systems by diverse vendors, with the goal of seamless integration and scaling. These models allow services to integrate instantly via relationships between disparate solutions, without service interruption or changes to the physical network. The tools are interoperable across platforms because they create common interfaces between different open source components, for example WordPress, Hadoop, MongoDB, MySQL, and containers/Kubernetes. These relationships allow the complexity of integrating services to be automated, giving users a better experience.
Service modeling frees network administrators to focus on bringing revenue-generating solutions and services to market, rather than architecting complicated network stacks and deploying additional resources. What matters to the developer is which services are involved, not how many machines they need, which cloud they run on, whether the machines are big or small, or whether all the services are installed on the same machine. 2017 will see a shift from software and infrastructure orchestration to software modeling, making the deployment of distributed systems faster and more efficient.
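The article does not name a specific modeling tool, but Canonical's Juju is one implementation of this approach: the operator declares which services exist and how they relate, and the tool decides where and how to place them. The sketch below is a minimal, hypothetical example that drives the Juju command line from Python to model the WordPress and MySQL pairing mentioned above; the charm names and the relation are assumptions for illustration.

# Hypothetical sketch of model-driven deployment via the Juju CLI.
# Services and their relationships are declared; machine placement,
# cloud choice, and sizing are left to the modeling tool.
import subprocess

def juju(*args: str) -> None:
    # Run one juju command and raise if it fails.
    subprocess.run(["juju", *args], check=True)

if __name__ == "__main__":
    juju("deploy", "wordpress")                  # declare the application
    juju("deploy", "mysql")                      # declare its database
    juju("add-relation", "wordpress", "mysql")   # integrate them; no hosts or IPs specified
    juju("expose", "wordpress")                  # make the front end reachable

The point of the model is that these four declarations stay the same whether the target substrate is OpenStack, a public cloud, or LXD containers on bare metal.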
 
5. 2017 will be the year OpenStack and containers move beyond proof of concept (POC) to solutions that solve real-world business problems.
As enterprises and operators deploy Infrastructure as a Service internally, there will be many challenges, opportunities, and phase shifts along the way. Three pillars, however, will remain the foundation for years to come: NFV on OpenStack, containers, and legacy bare metal servers and equipment. Alone, each of these solves a specific technology challenge, but together they provide a comprehensive strategy that allows enterprises and operators to deliver new services that are secure, efficient, elastic, and able to run at scale. Many systems of the past were “black boxes” with a single function or service. As the industry evolves, these “boxes” are becoming application servers capable of doing more than one thing well, from top-of-rack (TOR) switches to DSLAM (Digital Subscriber Line Access Multiplexer) units and even set-top boxes.