Measuring the resilience of IT services

White Paper 256 provides a forward-thinking methodology to improve the reliability of digital IT and critical data centre services, on which business continuity is most dependent.

  • Friday, 20th January 2017, by Phil Alsop
The nature of today’s digital economy and the surge in data from connected devices have fueled the growth of edge data centres, which ensure IT services remain reliable, resilient and available. Schneider Electric, the global specialist in energy management and automation, today released new research into data centre availability and why edge computing sites have a disproportionate effect on the resilience and reliability of digital IT services.

The research, detailed in Schneider Electric White Paper 256, “Why Cloud Computing is Requiring us to Rethink Resiliency at the Edge”, proposes a new methodology for measuring data centre availability. The approach considers the criticality of every edge site on which a business depends for its IT services, and concludes that greater attention to physical infrastructure in smaller data centres is necessary to improve overall resilience.

“The industry is seeing a change in the way it delivers services to customers,” said Kevin Brown, CTO and SVP of Innovation of Schneider Electric’s IT Division. “More businesses are utilizing a hybrid-cloud environment in which users in any one company access applications and services that may reside in several data centres of different sizes and with differing levels of availability. This supply chain is only as strong as its weakest link, so the industry has to consider which services are the most business critical and create a secure method for ensuring they remain available to their users.”

Larger, centralised Tier 3 data centres are built to be highly resilient, with multiple levels of redundancy, high standards of security and meticulous monitoring of all critical elements. Further down the chain are smaller regional data centres that nevertheless maintain similarly high standards of monitoring and backup. But at the lowest level, and nearest to the end users, are the micro data centres, which are often co-located on the customer’s premises and are most susceptible to downtime.

“Smaller data centres are often found on company premises, with little or no security, unorganised racks, no redundancy, no dedicated cooling and little or no DCIM software. These edge sites provide only a minority of the services the business uses, but are often of critical importance,” said Kevin Brown. “They may host proprietary applications on which the company depends, as well as the network infrastructure necessary to connect to outsourced services.”

Schneider Electric’s research proposes that the overall availability of IT services to a business should be calculated as the product of the availabilities (uptime percentages) of all data centres providing critical functions.

Although a large centralised data centre might post a highly resilient uptime figure, typically 99.98% or more, when this is combined with a typical Tier 1 edge data centre, whose corresponding benchmark is 99.67%, overall availability falls to roughly 99.65%.
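The “weakest link” calculation above can be sketched as a short script. This is an illustration of the arithmetic, not Schneider Electric’s actual model; the site labels are invented, and the availability figures are the two examples quoted in the article.

```python
# Overall availability of a service chain is the product of the
# availability of every data centre the service depends on.
HOURS_PER_YEAR = 8760

def overall_availability(site_availabilities):
    """Multiply the availabilities of all sites in the chain."""
    result = 1.0
    for a in site_availabilities:
        result *= a
    return result

# Figures from the article: Tier 3 core at 99.98%, Tier 1 edge at 99.67%.
chain = {"centralised Tier 3": 0.9998, "edge Tier 1": 0.9967}

overall = overall_availability(chain.values())
downtime_hours = (1 - overall) * HOURS_PER_YEAR

print(f"Overall availability: {overall:.4%}")        # ~99.65%
print(f"Expected annual downtime: {downtime_hours:.1f} hours")
```

Note that the edge site dominates the result: the chain can never be more available than its least reliable member.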

Further complicating the calculation is the fact that more people in an organisation may depend on locally hosted applications, so any downtime at such a site has a relatively high impact on business productivity.

Schneider Electric has devised a score-card methodology that combines systems availability across all sites, the number of people impacted, the criticality of each site, and annual downtime into a single dashboard, helping identify the areas most in need of attention.
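One possible shape for such a scorecard is sketched below. The field names, weighting scheme and example figures are all assumptions made for illustration; the white paper’s actual scoring formula is not described in this article.

```python
from dataclasses import dataclass

@dataclass
class SiteScore:
    """One row of a hypothetical availability scorecard."""
    name: str
    availability: float   # e.g. 0.9967 for a Tier 1 edge site
    people_impacted: int  # users who lose service if this site goes down
    criticality: int      # 1 (low) .. 5 (business critical) -- assumed scale

    @property
    def annual_downtime_hours(self) -> float:
        return (1 - self.availability) * 8760

    @property
    def risk_score(self) -> float:
        # Assumed weighting: downtime scaled by people affected and criticality.
        return self.annual_downtime_hours * self.people_impacted * self.criticality

# Invented example sites using the availability figures from the article.
sites = [
    SiteScore("centralised Tier 3", 0.9998, 2000, 5),
    SiteScore("edge micro DC", 0.9967, 300, 4),
]

# Rank sites so the dashboard surfaces the most urgent ones first.
for site in sorted(sites, key=lambda s: s.risk_score, reverse=True):
    print(f"{site.name}: {site.annual_downtime_hours:.1f} h/yr downtime, "
          f"risk score {site.risk_score:,.0f}")
```

With these assumed weights, the small edge site outranks the far larger Tier 3 facility, which mirrors the article’s argument that edge sites deserve disproportionate attention.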

“We need to rethink the design of data centre systems at the edge of the network,” said Kevin Brown. “As an industry, we have to improve physical security and monitoring, and increase redundancy in power, cooling and networking in micro data centres, to improve overall availability.”