Warnings of an Energy Crunch: What are the Implications for the Data Centre?

By David Palmer-Stevens, SI Manager, EMEA, Panduit.

Monday, 21st October 2013

Energy regulator Ofgem's recent warning that Britain could face power blackouts in the next 18 months served as a stark reminder that our depleting electricity supplies are being stretched to capacity. It is a sign that power supplies as we have known them could be on the way out, as the margin of capacity over demand narrows to between 2% and 5% by 2015.

Whilst we could sit back and wait for this to happen, I'd argue that we need to marshal our resources now and prepare for the inevitable. The focus needs to be on optimising power consumption and maintaining mission-critical IT during the blackouts, and cross-departmental teams need to work together to make this happen. This includes the facilities staff, who manage the power consumption and location of the physical resources in the data centre; the switching, server and storage staff, who will know the status and standards compliance of the IT resources; and the applications team, who will understand the requirements of the mission-critical applications and, most importantly, of the non-essential ones. Together, these teams need to map power consumption against IT equipment by application. Knowing what applications cost to run and deliver, and the location of the relevant IT assets, means a 'power priority' map can be made of the data centre.
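As a minimal sketch of what such a map might look like in practice, the fragment below ties each application to its IT assets, their power draw and a business priority. The application names, wattages and priority labels are purely illustrative assumptions, not figures from any real facility.

```python
# Hypothetical 'power priority' map: each application mapped to the IT assets
# that deliver it, their measured power draw (kW) and a business priority.
# All names and figures are illustrative only.
power_priority_map = {
    "order-processing": {
        "priority": "mission-critical",
        "assets": {"rack A3 servers": 6.0, "SAN shelf 1": 2.5, "core switch 1": 0.8},
    },
    "test-and-dev": {
        "priority": "non-essential",
        "assets": {"rack D1 servers": 4.2, "lab switch": 0.4},
    },
}

# Total IT load per application, e.g. to decide what stays up during a blackout.
for app, detail in power_priority_map.items():
    total_kw = sum(detail["assets"].values())
    print(f"{app}: {total_kw:.1f} kW ({detail['priority']})")
```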


Doing the Maths
Then it is a matter of simple arithmetic: determining which changes can be made to optimise the power supply without compromising performance. We need to calculate the total annual cost of the power to drive, cool and secure the IT assets, per application. From there we need to calculate what it would cost to reduce that power dependency. For example, the latest generation of Intel servers consumes half the electrical power of the previous generation for the same CPU performance.
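As a rough illustration of that arithmetic, the snippet below estimates the annual power cost for one application from its IT load, a cooling overhead and an electricity tariff. All of the input figures are assumptions chosen for the example.

```python
# Illustrative annual power cost for one application. The figures are
# assumptions chosen for the example, not measured data.
it_load_kw = 8.0          # measured IT load for the application's assets
cooling_overhead = 0.8    # kW of cooling and ancillary load per kW of IT load
price_per_kwh = 0.12      # electricity tariff in GBP/kWh
hours_per_year = 24 * 365

total_kw = it_load_kw * (1 + cooling_overhead)
annual_cost = total_kw * hours_per_year * price_per_kwh
print(f"Annual power cost: £{annual_cost:,.0f}")   # ~£15,137 for these inputs
```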


From the power map it should then be possible to work out whether measures such as halving the IT power and the cooling costs over 12 months would pay for the new servers, when supplemented with a normal equipment refresh budget. What about the network switches: do they implement IEEE 802.3az, the 'power down while idle' standard? This is estimated to reduce switch power requirements by 70%, with a similar reduction in cooling costs.
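A simple payback check along those lines might look like the following. The hardware loads, prices and refresh budget are hypothetical; the halving of server power and the 70% switch saving are the figures quoted above.

```python
# Rough payback check for a server and switch refresh. Loads, prices and the
# refresh budget are hypothetical placeholders.
price_per_kwh = 0.12
hours_per_year = 24 * 365

old_server_kw = 12.0                  # current IT load from servers
new_server_kw = old_server_kw / 2     # new generation: half the power, same CPU output
switch_kw = 1.5
switch_kw_saved = switch_kw * 0.70    # IEEE 802.3az 'power down while idle'

kw_saved = (old_server_kw - new_server_kw) + switch_kw_saved
cooling_factor = 2.0                  # assumption: each kW of IT saved also saves ~1 kW of cooling
annual_saving = kw_saved * cooling_factor * hours_per_year * price_per_kwh

refresh_cost = 40_000                 # net cost of new kit after the normal refresh budget
print(f"Annual saving: £{annual_saving:,.0f}, payback: {refresh_cost / annual_saving:.1f} years")
```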


One further way to make substantial savings is to virtualize applications, which would reduce server and switching requirements by 70% to 80%, depending on the current server loading, with a similar reduction in cooling costs.
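As a hypothetical consolidation estimate, the server count and per-server draw below are invented for the example; the 70% to 80% reduction is the range quoted above.

```python
# Hypothetical virtualization consolidation estimate. Counts and wattages are
# illustrative; the 70-80% reduction is the range quoted in the text.
physical_servers = 100
watts_per_server = 400
consolidation_reduction = 0.75            # midpoint of the 70-80% range

it_kw_before = physical_servers * watts_per_server / 1000
it_kw_after = it_kw_before * (1 - consolidation_reduction)
print(f"IT load: {it_kw_before:.0f} kW -> {it_kw_after:.0f} kW "
      f"(cooling falls by a similar proportion)")
```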


Build up from the Basics
Having made an IT resourcing plan, it's then important to get the basics correct, starting with the cabinets. These should be sealed front to back, with doors that permit the required airflow. There should be no cables crossing airflow or service points, and any cut-through in floors or ceilings for cables to pass must be sealed against air leaks; paint brushes should not be used to seal air gaps. Ensure that IT equipment has front-to-back airflow and, if not, use the correct ducting to rectify it, as recirculating hot air will drastically reduce the efficiency of the cooling system.


You should also implement containment; a simple hot and cold aisle layout is not efficient enough. There is a choice of hot-aisle containment, cold-aisle containment or vertical exhaust systems, all of which have the same thermal footprint and will reduce cooling costs by 23%.* The fans in the cooling system should use variable-torque motors rather than traditional fixed-torque motors, because the load is variable. If the load on a fixed-torque motor is reduced by 20%, the electricity required drops by 20%. With a variable-torque motor, however, reduce the load by 20% and the electricity drops by around 50%. This can be used to your advantage when sizing your cooling: instead of running cooling units at 100%, you can purchase an extra cooling unit, run them all at 76% load, and the effect will be to reduce the cooling costs by 50%.
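The variable-torque figures reflect the fan affinity laws, under which fan power varies roughly with the cube of fan speed. The short sketch below makes that arithmetic visible; the unit counts are illustrative assumptions rather than a sizing recommendation.

```python
# Fan affinity law sketch: fan power scales roughly with the cube of speed.
def fan_power_fraction(speed_fraction: float) -> float:
    return speed_fraction ** 3

print(f"80% speed -> {fan_power_fraction(0.80):.0%} of full power")   # ~51%
print(f"76% speed -> {fan_power_fraction(0.76):.0%} of full power")   # ~44%

# Adding an extra cooling unit so that, say, four fans at 80% replace
# three fans at 100%:
before = 3 * fan_power_fraction(1.0)
after = 4 * fan_power_fraction(0.80)
print(f"Total fan power: {after / before:.0%} of the original")        # ~68%
```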


Power Supplies: Understand the Load Patterns
With power blackouts anticipated, we also need to address the UPS back-up. The first step is to establish which business applications are mission-critical and whether business continuity must be maintained for every task; there is an argument that some non-critical systems need not be kept running through a blackout. If you have made the decision to virtualize, you will have a variable load, so you will need UPSs with a uniform efficiency across the range of that variable load. In practice this means getting rid of any transformer-based UPSs.
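To see why a flat efficiency curve matters with a virtualized, variable load, the rough comparison below totals the annual energy lost in the UPS for two assumed efficiency curves. The curves, the load profile and the tariff are illustrative assumptions, not vendor data.

```python
# Illustrative comparison of annual UPS losses across a variable load.
# Efficiency curves, load profile and tariff are assumptions, not vendor data.
hours_per_year = 24 * 365
price_per_kwh = 0.12
rated_kw = 12.0

# Fraction of the year spent at each load level (25%, 50%, 75% of rating).
load_profile = {0.25: 0.4, 0.50: 0.4, 0.75: 0.2}

# Assumed efficiency at each load level: transformer-based units typically
# fall away at partial load, transformerless units stay flatter.
efficiency = {
    "transformer-based": {0.25: 0.82, 0.50: 0.90, 0.75: 0.93},
    "transformerless":   {0.25: 0.94, 0.50: 0.95, 0.75: 0.96},
}

for ups_type, curve in efficiency.items():
    loss_kwh = sum(
        rated_kw * load * (1 / curve[load] - 1) * share * hours_per_year
        for load, share in load_profile.items()
    )
    print(f"{ups_type}: £{loss_kwh * price_per_kwh:,.0f}/yr lost as heat")
```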


The bottom line is that it's important to understand load patterns before we start suffering from the predicted blackouts, because battery performance is non-linear and that knowledge can be used to your advantage. For example, if you are running applications from a battery back-up specified for 30 minutes at a given load and you halve that load, you could actually get close to two hours of power.
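One common way to model that non-linearity is Peukert's law, under which runtime scales with the inverse of the load raised to an exponent greater than one. The sketch below is illustrative only: the exponent and the baseline runtime are assumptions, real runtimes should come from the manufacturer's discharge tables, and the figure of close to two hours quoted above corresponds to the steeper end of the curve.

```python
# Peukert-style sketch of battery runtime versus load. Baseline runtime and
# exponent are assumptions; use manufacturer discharge tables for real sizing.
def runtime_minutes(baseline_minutes: float, load_fraction: float, k: float = 1.3) -> float:
    return baseline_minutes * (1.0 / load_fraction) ** k

print(f"Full load: {runtime_minutes(30, 1.0):.0f} min")
print(f"Half load: {runtime_minutes(30, 0.5):.0f} min")                          # ~74 min with k = 1.3
print(f"Half load, steeper curve (k = 2): {runtime_minutes(30, 0.5, k=2):.0f} min")  # ~120 min
```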


Having mapped out your power profile, implemented your IT product strategy and fixed all the basics to minimise your power dependency in the data centre, you now need to maintain it through the adds, moves and changes that inevitably occur with IT deployments. Tools such as DCIM (Data Centre Infrastructure Management) can help to fine-tune the power savings that should be implemented: you cannot manage and maintain what you cannot see.

The prospect of an energy crunch is raising questions about resource management across the industry. The good news is that there are measures we can take now to ensure that, should it happen, we have laid the best possible foundations to protect against failures in service.