Accelerating containerised Big Data workloads for hybrid architectures

Hortonworks Data Platform, Hortonworks DataFlow, Hortonworks DataPlane and IBM Cloud Private for Data bring comprehensive enterprise data platforms to Red Hat OpenShift.

  • Tuesday, 11th September 2018, posted by Phil Alsop
Hortonworks, IBM and Red Hat have introduced the Open Hybrid Architecture Initiative, a new collaborative effort to build a common enterprise deployment model designed to enable big data workloads to run across hybrid on-premises, multi-cloud and edge architectures.

As the initial phase of the initiative, the companies plan to work together to optimize Hortonworks Data Platform, Hortonworks DataFlow, Hortonworks DataPlane and IBM Cloud Private for Data for use on Red Hat OpenShift, an industry-leading enterprise container and Kubernetes application platform. This can enable users to develop and deploy containerized big data workloads, ultimately making it easier for customers to manage data applications across hybrid cloud deployments. In addition, IBM and Hortonworks will extend their joint work to integrate key services offered through Hortonworks DataPlane with IBM Cloud Private for Data.
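
By way of illustration, the kind of containerized data workload the initiative describes could be deployed programmatically against the Kubernetes API that OpenShift builds on. The sketch below uses the official Python kubernetes client; the image name, namespace, labels and resource sizes are placeholder assumptions for illustration only, not Hortonworks or IBM artifacts.

```python
# Minimal sketch: create a Kubernetes Deployment for a containerized data
# workload. All names and sizes below are hypothetical placeholders.
from kubernetes import client, config


def deploy_data_workload(namespace: str = "big-data") -> None:
    # Load credentials from the local kubeconfig; inside a pod you would
    # use config.load_incluster_config() instead.
    config.load_kube_config()

    container = client.V1Container(
        name="dataflow-worker",
        image="registry.example.local/dataflow-worker:latest",  # hypothetical image
        resources=client.V1ResourceRequirements(
            requests={"cpu": "2", "memory": "4Gi"},
        ),
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "dataflow-worker"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="dataflow-worker"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "dataflow-worker"}),
            template=template,
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace=namespace, body=deployment)


if __name__ == "__main__":
    deploy_data_workload()
```

The same Deployment could equally be expressed declaratively and managed through OpenShift tooling; the point is only that the workload is packaged and scheduled as containers rather than tied to fixed cluster nodes.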

Enterprises are undergoing massive business model transformations, powered by the ability to process and analyze new types and tremendous amounts of data. As a result, many are moving to hybrid cloud environments built on lightweight microservices that use infrastructure as efficiently as possible.

Hortonworks and IBM previously announced a collaboration to help businesses accelerate data-driven decision-making. Today’s news builds upon that foundation with the intent to bring big data workloads to a modern and container-based foundation, enabling customers to deploy Hortonworks and IBM platforms into a hybrid cloud environment powered by Red Hat OpenShift. The initiative includes the following:

  • Hortonworks plans to certify Hortonworks Data Platform, Hortonworks DataFlow and Hortonworks DataPlane as Red Hat Certified Containers on Red Hat OpenShift and looks forward to achieving “Primed” designation. In addition, Hortonworks will enhance HDP to adopt a cloud-native architecture for on-premises deployments by separating compute and storage and containerizing all Hortonworks Data Platform and Hortonworks DataFlow workloads (a sketch of this compute/storage separation follows this list). This allows customers to more easily adopt a hybrid architecture for big data applications and analytics, all with the common and trusted security features, data governance and operations that enterprises require.
  • IBM, which announced plans earlier this year to extend the IBM Cloud Private platform and middleware to the OpenShift platform, has begun the Red Hat OpenShift certification process for IBM Cloud Private for Data and has achieved “Primed” designation as the first phase. The move will help provide the vast OpenShift community of developers and users, which includes IBM and Hortonworks clients, fast access to robust analytics, data science, machine learning, data management and governance capabilities, fully supported across hybrid clouds.
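
The compute/storage separation mentioned in the first bullet can be pictured with a small, generic Spark example: the job runs as containers while the data sits in an external S3-compatible object store reached through the s3a connector. The endpoint, bucket and credentials below are placeholder assumptions and not part of HDP or HDF themselves; the cluster is assumed to have the hadoop-aws libraries available.

```python
# Illustrative sketch of decoupled compute and storage: a containerized
# Spark job reads data from an external object store rather than local HDFS.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("decoupled-storage-example")
    # Point the s3a connector at an external, S3-compatible object store
    # (hypothetical endpoint and credentials).
    .config("spark.hadoop.fs.s3a.endpoint", "https://objectstore.example.local")
    .config("spark.hadoop.fs.s3a.access.key", "EXAMPLE_ACCESS_KEY")
    .config("spark.hadoop.fs.s3a.secret.key", "EXAMPLE_SECRET_KEY")
    .getOrCreate()
)

# Compute (this containerized job) scales independently of the data,
# which stays in the shared object store.
events = spark.read.parquet("s3a://example-bucket/events/")
events.groupBy("event_type").count().show()

spark.stop()
```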

“Kubernetes is the de facto container orchestration system and we have been working in this ecosystem since the project’s infancy to help make it ready for all, and especially the enterprise in Red Hat OpenShift. Today we are thrilled that Hortonworks and IBM Cloud Private’s data portfolio have selected Red Hat OpenShift as the trusted Kubernetes platform for big data workloads,” said Ashesh Badani, vice president and general manager, Cloud Platforms, Red Hat. “By building and managing their applications via containers and Kubernetes with OpenShift, customers and the big data ecosystem have opportunities to bring this next generation of big data workloads to the hybrid cloud and deliver the benefits of an agile, efficient, reliable, multi-cloud infrastructure.”

“The work that Red Hat, IBM and Hortonworks are doing to modernize enterprise big data workloads via containerization is aimed at helping customers to take advantage of the agility, economics and scale of a hybrid data architecture,” said Rob Bearden, chief executive officer of Hortonworks. “The innovations resulting from this collaboration can enable the seamless and trusted hybrid deployment model needed today by enterprises that are undergoing significant business model transformation.”

In addition to competitive and data challenges, organizations are scrambling to bring applications once designed for the public cloud behind the firewall for tighter control, lower costs, stronger security and easier management. In fact, in a recent IDC Cloud and AI Adoption Survey[1], more than 80 percent of respondents said they plan to move or repatriate data and workloads from public cloud environments behind the firewall to hosted private clouds or on-premises locations over the next year, because the initial expectations of a single public cloud provider were not realized.

“As these dynamics continue, they’ll work to slow innovation and hinder companies’ progression to enterprise AI,” said Rob Thomas, general manager, IBM Analytics. “Scaling the ladder to AI demands robust data prep, analytics, data science and governance, all of which are easily scaled and streamlined in the kind of containerized, Kubernetes-orchestrated environments that we’re talking about today.”