Digital Infrastructure: what it is and why you need a DI strategy

Nearly 400 delegates attended 451 Research’s annual Hosting & Cloud Transformation Summit Europe (HCTSEU) in London – more than one-quarter of them CxOs. The program included enterprise end users, service providers, datacenter owner/operators, investment banking executives, and a raft of venture capital firms sharing their experience and advice as takeaways for the audience. By William Fellows, principal analyst, 451 Research.

Wednesday, 29th May 2013

451 Research data suggests that, on average, the datacenter market is utilizing only 10% of total compute power. Moreover, 60% of all IT infrastructure is intentionally over-deployed due to workload-variance concerns. Provisioning for peak capacity is alive and well, it seems. We advocate the use of Digital Infrastructure strategies to ease these high capital-expenditure costs by combining a range of service-delivery models to optimize capacity provisioning and utilization.
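To put rough numbers on the over-provisioning problem, here is a minimal sketch. The 10% average utilization is the figure quoted above; the fleet size and per-server cost are hypothetical illustrations.

```python
# Illustrative sketch of peak-provisioning economics. The 10% utilization
# figure is from the text above; fleet size and cost per server are
# hypothetical.

def idle_capex(total_servers: int, avg_utilization: float,
               cost_per_server: float) -> float:
    """Capital tied up in capacity that sits idle on average."""
    return total_servers * cost_per_server * (1.0 - avg_utilization)

# A hypothetical 1,000-server estate at $5,000 per server, 10% utilized
waste = idle_capex(1000, 0.10, 5000.0)
print(f"Capital tied up in idle capacity: ${waste:,.0f}")  # → $4,500,000
```

The point of the Digital Infrastructure argument is that blending service-delivery models lets some of that idle margin be rented at peak rather than owned year-round.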

Digital Infrastructure was the backdrop for the recent HCTSEU, and is the organizing principle for 451 Research itself. What is it? Digital Infrastructure strategies better align facilities, IT hardware and software stacks, and service-delivery mechanisms with consumption requirements, in pursuit of an optimal or desired state. It is both holistic and hybrid. We think of Digital Infrastructure as the analog to physical infrastructure – roads, trains and distribution depots. Cost is key, so DI applies an economic model to infrastructure. Every organization needs one, yet few currently take a deliberate, planned approach to their digital estate. Moreover, it’s our understanding (from primary research with enterprise end users) that organizations that do not consistently improve the efficiency of their Digital Infrastructure will find themselves at a competitive disadvantage, in terms of both cost and business agility. That’s why we believe your Digital Infrastructure is simply ‘too big to fail.’

Taking this as the starting point, 451 chief strategy officer and co-head of 451 Advisors Tony Bishop discussed why ‘The Datacenter is the New Business Platform’ with Rob Bath, VP engineering, Europe, Digital Realty; Paul Boyns, head of IT strategy & policy at BBC; Barry Childe, head of HTS GBM research & innovation, HSBC; Roger De’Ath, enterprise sales, Google; and Chris Swan, CTO CohesiveFT. The group examined a range of innovations in infrastructure creating a foundation of ‘interactive and real-time’ information, connectivity, and processing capabilities of bewildering complexity. The summary is that organizations now need a Digital Infrastructure strategy for this new business platform – a platform that must be adaptive and agile; that must be efficient, resilient and secure; and must provide a consistent enriched customer experience.

The impact of energy
Andy Lawrence, 451 VP of research for datacenter technologies, set out to broaden the discussion around datacenter costs, planning and energy use in the ‘Energy, Efficiency and Economics’ panel. This is a topic that not only recurred throughout the two days of HCTSEU, but has also been attracting considerable attention in recent months following a controversial article on the subject in the New York Times.

Lawrence argued that all businesses that are in any way dependent on datacenters need to understand the impact of energy costs. Over the 15-year depreciated life of a datacenter, energy can account for 15% of total costs, or as much as 30% if the IT equipment is excluded. While significant, this figure does not capture the potential savings available from Moore’s Law, better utilization and improvements in facilities efficiency. Figures collected by 451 Research also showed that energy costs are rising in Europe but have stabilized in the US, a trend that can be traced back to 2003.
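Those two percentages imply a useful cross-check: if energy is 15% of total cost but 30% of the cost base once IT equipment is excluded, then the IT equipment itself must account for half of total cost. A quick arithmetic sketch:

```python
# Cross-check on the energy-share figures quoted above: energy is 15% of
# total 15-year cost, but 30% of the cost base once IT equipment is excluded.

energy_share_total = 0.15   # energy / total cost
energy_share_ex_it = 0.30   # energy / (total cost - IT equipment cost)

# energy = 0.15 * total  and  energy = 0.30 * (total - it_equipment)
# => total - it_equipment = (0.15 / 0.30) * total
it_share = 1.0 - energy_share_total / energy_share_ex_it
print(f"Implied IT equipment share of total cost: {it_share:.0%}")  # → 50%
```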
Using a ‘miles per gallon’ model very loosely based on eBay’s ‘transactions per $ in energy,’ Lawrence hypothesized that big US cloud service providers, in particular, could become as much as 25 times more energy-efficient than the average European datacenter today. They are not only greener, but also much more cost-effective. A key discussion point was the potential volatility introduced by virtualization and new, more efficient equipment.
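The ‘miles per gallon’ framing reduces to a simple ratio – useful work delivered per dollar spent on energy. A hedged sketch follows; every input number is invented purely to reproduce the 25x gap Lawrence hypothesized, not measured data.

```python
# 'Miles per gallon' style efficiency metric: transactions delivered per
# dollar of energy spent. All inputs below are hypothetical, chosen only
# to reproduce the 25x gap hypothesized in the text.

def txns_per_energy_dollar(transactions: float, energy_kwh: float,
                           price_per_kwh: float) -> float:
    """Useful work delivered per dollar of energy consumed."""
    return transactions / (energy_kwh * price_per_kwh)

big_us_cloud = txns_per_energy_dollar(1_000_000_000, 500_000, 0.08)
avg_eu_dc    = txns_per_energy_dollar(100_000_000, 500_000, 0.20)
print(f"Efficiency ratio: {big_us_cloud / avg_eu_dc:.0f}x")  # → 25x
```

Note how the metric folds in both scale (transactions per kWh) and the local energy price, which is why rising European prices widen the gap.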

Panelists Ibrahim Chadirichi, director of information management, ARM; Patrick Griffiths, VP data centre engineering, Morgan Stanley; Liam Newcombe, CTO, Romonet; and Jim Shanahan, head of datacenter automation, ABB agreed that energy is a critical driver of datacenter costs and a huge influence on future datacenter planning. Moreover, Shanahan and Griffiths shared their experience of exploring smart-grid and demand-response technologies to reduce energy consumption. So far, these developments have yet to make their mark, but they are expected to in the near future.

Disruptive technologies
451 Research’s datacenter technologies team also used HCTSEU to preview its forthcoming ‘Disruptive Datacenter Technologies’ report ahead of its publication at Uptime Symposium on May 13 in Santa Clara, California. One point raised in the discussion, and amplified by Intel’s Alan Priestley, is that a lot has been done on the facilities side to improve the energy efficiency of datacenters (use of free air cooling, raising temperatures, improved datacenter infrastructure management tools, etc.), but not much has been done on the IT side. Many tools and techniques for measuring, monitoring and controlling IT already exist, such as IT power management and Intel Datacenter Manager (DCM), but they are not actively used at the moment. Why aren’t they used more? It comes down to concerns about uptime – anything that could interrupt service continuity is still viewed with skepticism – but also a lack of awareness.

Converged infrastructure = better utility?
There was some interesting discussion about the role of converged infrastructure as a disruptive technology. Distinguished analyst John Abbott pointed to changing buying patterns in the server sector, tied in with evidence that unit server sales in Europe are declining. That’s partly due to the weak economic conditions and refresh cycles, but maturing levels of virtualization and the consolidation of existing resources to improve utilization are also factors. In this climate, there is increasing interest in converged infrastructure products in Europe – specifically as a disruptive technology. There’s a general move among systems companies away from the rigidity of the initial converged systems, with more integration options for third-party products and greater flexibility at the management layer to accommodate existing investments in management software.

Best execution venues
As outsourcing, managed hosting and cloud continue to converge, users are increasingly seeking options that allow them to select from these models and others as part of ‘best execution venue’ strategies that support a Digital Infrastructure. However, all of 451 Research VP William Fellows’ ‘Best Execution Venue’ panelists agreed that while there is a lot of talk about hybrid and multi-cloud, few are actually doing it yet.

Gravitant CMO Praveen Asthana believes this goes to the heart of the hesitation many customers have with using cloud: Which cloud (private and/or which public) is best for a given app, workload or business need? This problem is exacerbated as the number of cloud providers and technologies proliferates, and users will need decision support and a way of managing this IT supply chain as part of Digital Infrastructure strategies. With that said, Gravitant has paying customers for its cloud management and brokering service, such as the State of Texas, which is actively deploying multiple clouds.
Doubly interesting is Asthana’s claim that a number of US public agencies are to begin mandating the use of cloud brokers in service procurement in order to ensure they can find best execution venues.
In the UK, Steve O’Connor, director of technology, Parliamentary ICT, has a range of preapproved venues and services already available via the government’s G-Cloud program. PICT is already experienced with virtualization and cloud; its key challenge is whether to move lock, stock and barrel to Gmail. We’ve found that workspace services – usually packaged email, messaging, document and collaboration (but most often email) – act as a lightning rod, if not a tipping point, for accelerated cloud adoption. Recent end users we’ve spoken to that have gone down this route include Trinity Mirror Media and the Open University, for which it was a real ‘crossing the Rubicon’ experience in terms of the baked-in process change it brought with it.

O’Connor pushed back against what he believes is an unnecessarily forensic focus on data sovereignty/protection issues in the cloud market. His experience in this area, which, via the Houses of Parliament, is very much in public view, has been good. Users coming to cloud, including PICT, have typically dealt with hosted services in some shape or form, and 451’s consistent advice has been to reuse any practices and policies that may be in place with existing providers. As far as O’Connor is concerned, the thorny issue is moving data and workloads between providers, or between in-house systems and the cloud.
This is where VMware’s vCloud Hybrid Service should play, enabling users to move between different venues – on-premises, multi-tenant public and managed – to meet different workload needs. While this is not a VMware-only party – VMware’s vCloud Automation Center (née DynamicOps) also supports third-party clouds – it is more directly where Cloudsoft plays, supporting application management into and across multiple clouds. Indeed, HCTSEU was an important road marker for Cloudsoft, cementing our belief that it has a role to play as a fully paid-up participant in cloud lifecycle management and brokering – it’s no longer only a PaaS play. There was also much talk of kittens, chickens and container ships here, but in essence, the feeling is that IT departments will need to adopt more industrialized processes.

Cloud and banking: lack of regulation and culture change are the barriers
HSBC’s Barry Childe observed that while the bank has some exposure to Amazon public cloud, it’s really a very small step compared with what it could be doing – it’s effectively a playground for testing. Indeed, speaking with banking industry executives at HCTSEU, including BNP Paribas, Morgan Stanley and ING, it’s clear that there is an appetite to take advantage of cloud-based and other hosted services.

The gating factors are principally twofold. First is the need for regulatory authorities – the FSA in the UK – to grant appropriate licenses to providers, enabling them to act as bona fide suppliers and enshrining indemnity, availability and data-protection obligations into contracts, thus allowing banks to do business with them. Second is the apparent inability of these huge investment banking organizations to deliver change (even where there is demonstrated innovation) because of risk and organizational cultures and structures allergic to change – aka death by committee. Childe highlighted that at many large financial services firms, it is in their consumer-facing retail organizations where innovation can be found. This is where banks can and must be responsive and competitive for the customer, especially around the consumerization of IT and mobility.

Authentication and the virtual machine
The struggle with authentication in the cloud was the burning issue of 451 Research director Wendy Nather’s fireside chat with SafeNet’s VP of cloud, Jason Hart. Authenticating users in the cloud is different from authenticating VMs – for one thing, users tend to have many different roles within different contexts, and that makes their authentication more complicated.

Virtual machines should be authenticated as well – doing so provides an additional authentication factor for the user, just as a mobile device does, and it is important for service-to-service interactions in which users don’t take part. Hart pointed out that with business applications also being vulnerable to attack, encryption is critically important for security in this environment – and yet organizations are rushing to the cloud without considering security until later.
But how can the integrity of VMs be preserved through encryption? Given that dynamic provisioning in the cloud can lead to VM sprawl, and that VM images can just as easily be stored as files and forgotten, organizations need to make sure those images aren’t stolen or tampered with.

Storage trinity: cheap, fast and reliable
Traditional storage technologies often struggle to meet service-provider requirements when building out cloud services, whether as stand-alone storage clouds or as part of a broader IaaS, PaaS or flexible managed hosting stack. Why? Because, historically, storage can deliver only two of three attributes – high performance, low cost and reliability. This, combined with the fact that storage remains complex, difficult to scale and lacking in quality-of-service (QoS) capabilities, places further constraints on xSPs looking to operate in an increasingly competitive market.

The idea of software-defined storage – a new breed of storage platforms that embrace elements such as scale-out design, API-driven management, extensive use of commodity hardware and even open source – was discussed by 451 Research VP Simon Robinson’s panelists as an enabler for next-generation storage in the cloud. It’s early days, for sure, but service providers willing to take a risk on a new startup technology are already experiencing the benefits. Julian Box, CEO of Channel Islands-based hosting provider Calligo, noted that his company’s deployment of all-flash array technology from SolidFire was for the first time allowing it to host mission-critical applications with fine-grained QoS control at a cost point that made sense.

Meanwhile, Basho Technologies is at the forefront of the move to open source storage after recently moving its Riak Cloud Storage object platform to a community open source model. For many Basho customers, it’s the rich S3-compatible API that is the crucial element in making cloud storage buildouts as simple as possible, according to EVP Bobby Patrick. So where is this all heading? Is the holy trinity of ‘cheap, fast and reliable’ storage a realistic goal for service providers? According to CloudFounders’ Joost Metten, the company is already able to provide this today. Innovation at the storage technology layer will continue to drive disruption over the coming years, and service providers are going to lead the charge.

Mobilizing workflows using cloud
With a panel of CIOs and senior IT leaders from both UK public and private sectors, 451 Research and Yankee Group analysts Chris Hazelton, Vishal Jain and Chris Marsh discussed the challenges executives are facing in their mobility deployments. The panel – Danny Attias, CIO Grass Roots; James Greenman, Group IT Director, Care UK; and Geoff Linnell, Group CIO, Celerant Consulting – explored the diverse requirements IT leaders face in managing their mobile workers. Whether it is Celerant’s consultants working full-time around the world at different customer sites, Care UK’s nationally distributed workforce delivering care services to patients in their homes, or Grass Roots’ increasingly mobile employees, this touches all organizations and markets. Vast geographic distances, tight government regulations and strongly independent employees provide the greatest challenges to IT as it looks to provide a secure mobile environment for employees.
The panelists shared their views on how they see cloud infrastructure and services affecting these and future projects. They reported that the drivers for mobile cloud adoption – and they meant mobility rather than cloud – range from cost reduction and productivity gains to competitive differentiation and regulatory requirements, including the use of SaaS mobility management tools. For them, cloud enables both internal and external workflow mobilization. Cloud infrastructure and SaaS applications were discussed as ways to provide access to core business systems and management resources for employees working in mobile and remote locations, as well as a way to capture, compute and analyze field-based data in B2B or B2C workflows.

There was broad agreement on the difficulty of measuring an ROI on mobility. There is a need to gain a return on mobility investments, but each panelist had gone to differing lengths to measure and justify this return. As Attias pointed out, for some deployments the combination of mobility and cloud delivers such obvious productivity gains that measuring the ROI in detail is not always necessary.

Enterprise cloud users: standardization and SaaS are drivers
Although they represent a variety of industries and organization sizes, the enterprise end users on Enterprise451 managing director Ken Male’s panel have come to cloud via an adoption path very similar to the one virtualization took, beginning with test and development. The key takeaways were that IaaS usage is dwarfed by SaaS; that accelerated use of cloud is often driven by M&A, which means having to do things fast; and that cloud provides a catalyst for consolidating workloads such as ERP, HCM, CRM and UC/collaboration onto SaaS.

We welcomed back Ian McDonald, head of infrastructure and cloud at News International, this year, and he provided an update on NI’s cloud deployment. The group has made a significant commitment to AWS, which is now referred to simply as its ‘Data Center #3’ (an internal cloud is known as NC3). NI is doing more with AWS around automation and is ‘moving up the stack’ – development work is now done in-cloud, and 20% of server loads run on AWS. It uses Google for collaboration, plus salesforce.com’s Heroku. One issue is whether SaaS use will itself create silos of IT and lead to a lack of integration. Key for the organization is to recognize this and have an application development strategy to address it.

Manuel Restrepo, CEO of BNP Paribas PSC Limited (Cardiff Insurance, which is a division of BNP), uses cloud to enable developers to work across several geographies, which has been a big cultural change for the organization. It is using AWS for test and dev, supporting a global development team spanning Asia, North America and Europe. Deployment reportedly took three months less than expected and required 25% fewer resources than physical systems. The initiative grew out of an internal startup that incubated the use of cloud, plus a center of excellence that was empowered as an agent of change to make these things happen.

Mark Gwynne, IT director of Severn Trent Services, says his organization now has a ‘cloud first’ policy for SaaS – collaboration (Office 365, Lync, SharePoint), ERP, HCM (a single system of record for each employee) and CRM. M&A and standardization were key drivers for the move to cloud – pre-cloud, it had disparate IT silos managed locally. It used an MPLS network upgrade as a change-enabler, which eliminated the need for regional approaches and enabled business units to create their own workflows with a service catalog.

Google gets high marks from Greenpeace
Andrew Hatton, head of IT at Greenpeace UK, also used SaaS as the on-ramp to cloud, migrating from Microsoft Exchange to Google and to PaaS with Force.com from salesforce.com. It’s now looking at file-sharing. Greenpeace has many affiliates globally, so it’s trying to use cloud as a way to centralize core business functions. The key for it is to work with companies that have a ‘green’ datacenter strategy – either in use today or on a near-term roadmap. Sustainability is key, and Google gets high marks from Greenpeace here.