Cool moves

Steps to reduce power consumption without compromising on cooling. By Carrie Higbie, Global Director of Data Centre Solutions and Services for Siemon.

Tuesday, 27th August 2013

Given that IT activity accounts for two per cent of global CO2 emissions, it’s easy to imagine the enormous amount spent on powering our data centres. This endless operating expense drains budgets and taints our industry’s reputation, so what steps can be taken to reduce power consumption without compromising on cooling?

For the data centre, it’s essential that we focus on sensible decisions, practical ideas and minimum cost initiatives that can be easily justified by rapid payback. By making the right choices at the outset and then considering low cost steps that can be made individually, as resources allow, it’s easily possible to demonstrate measurable improvement. These are realistic recommendations for lowering carbon emissions and greening the data centre, no matter where it is in its lifecycle.

Infrastructure architecture
If you are building or expanding your data centre, the cabling architecture you choose will influence long-term power consumption. An any-to-all structured cabling configuration, using a distribution area as outlined in TIA-942-A and ISO/IEC 24764, will allow you to place your servers where it makes the most sense for power and cooling, without the distance limitations of point-to-point cables and without the concern over switch port availability that comes with a top of rack (ToR) configuration. If you are pre-cabled using an any-to-all design, a server can be deployed anywhere and connected to any switch port via the central patching cross connect within the distribution area.

A structured cabling system may require more cabling than a ToR topology, but if, for example, it keeps you from adding supplemental cooling, the payback is excellent. According to real world comparative designs, the structured cabling cost for any-to-all zones is less than half the cost of proprietary point-to-point cables, and based on average power figures per rack, the cost of the unused ports in a ToR topology far exceeds structured cabling costs: structured cabling equates to only 10 to 15 per cent of the cost of the unused ports alone, before counting the annual maintenance and power costs of the additional switches a ToR configuration requires. Those additional switches in each cabinet, with ports that can’t be used, likewise lead to stranded power. Structured cabling can also provide zones with cabling channels of up to 100m, enabling greater flexibility in data centre design, versus the point-to-point cable assemblies used in ToR configurations, which are typically limited to seven metres in passive mode.
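To make that arithmetic concrete, here is a minimal sketch of the comparison. Every figure in it is invented for illustration (cabinet counts, per-port costs, tariff and the cabling quote are hypothetical, not drawn from Siemon’s comparative designs), but the structure of the calculation is what a real assessment would follow.

```python
# Illustrative arithmetic only: every figure below is invented and should
# be replaced with numbers from your own design quotes and tariffs.

CABINETS = 50
TOR_PORTS_PER_CABINET = 48        # two 24-port ToR switches per cabinet
SERVER_PORTS_PER_CABINET = 30     # ports a cabinet actually needs
COST_PER_SWITCH_PORT = 800.0      # per-port cost incl. optics (currency units)
SWITCHES_PER_CABINET = 2
WATTS_PER_SWITCH = 200            # each ToR switch draws this, used or not
TARIFF_PER_KWH = 0.12
STRUCTURED_CABLING_QUOTE = 90_000.0  # any-to-all zone cabling, installed

unused_ports = CABINETS * (TOR_PORTS_PER_CABINET - SERVER_PORTS_PER_CABINET)
unused_port_capex = unused_ports * COST_PER_SWITCH_PORT

# The switches burn power around the clock whether their ports are used or not.
annual_kwh = CABINETS * SWITCHES_PER_CABINET * WATTS_PER_SWITCH / 1000 * 24 * 365
annual_switch_power_cost = annual_kwh * TARIFF_PER_KWH

print(f"Unused ToR ports:                    {unused_ports}")
print(f"Capital tied up in unused ports:     {unused_port_capex:,.0f}")
print(f"Annual power bill for ToR switches:  {annual_switch_power_cost:,.0f}")
print(f"Cabling quote vs unused-port capex:  "
      f"{STRUCTURED_CABLING_QUOTE / unused_port_capex:.0%}")
```

With these invented inputs the cabling quote lands at roughly 12 per cent of the unused-port capital, in line with the 10 to 15 per cent figure above; substitute your own quotes to make the payback case for your site.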


Look for stranded power
Stranded power is power that is provisioned but never consumed. There are several culprits. The first is using the nameplate rating on power supplies as the basis for provisioning: equipment rarely draws anywhere near its rated figure, so capacity is reserved that will never be used. The second is simply poor decommissioning practice. The third is over-provisioning of switch ports, followed by overly redundant connections.


Power monitoring software is critical in determining both stranded power and compute-to-power ratios. You can’t fix what you don’t measure, and you can’t tell whether the fixes work if you don’t keep measuring. Investing in intelligent power distribution units (PDUs) will help. Proper change management and decommissioning practices will reduce the problem in future: when equipment is retired through attrition or refresh, the old equipment should be removed, its power returned to the provisioning pool, and the requirements of any replacement assessed rather than assumed to match the old kit.
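As a sketch of what that measurement makes possible, the snippet below compares provisioned power against the peak draw recorded for each rack, keeps a margin of headroom, and totals what is stranded. The rack names, provisioned figures, peak readings and 20 per cent headroom policy are all invented; in practice the peaks would come from your intelligent PDUs’ measurement history.

```python
# A minimal stranded-power audit, using invented figures. Real peak
# readings would come from intelligent PDU measurement history.

provisioned_kw = {"rack-a1": 8.0, "rack-a2": 8.0, "rack-b1": 12.0}
peak_measured_kw = {"rack-a1": 3.1, "rack-a2": 5.6, "rack-b1": 4.8}

HEADROOM = 1.2  # keep 20% margin above observed peak before calling power stranded

total_stranded = 0.0
for rack, provisioned in sorted(provisioned_kw.items()):
    needed = peak_measured_kw[rack] * HEADROOM
    stranded = max(provisioned - needed, 0.0)
    total_stranded += stranded
    print(f"{rack}: provisioned {provisioned:4.1f} kW, "
          f"peak {peak_measured_kw[rack]:4.1f} kW, stranded {stranded:4.1f} kW")

print(f"Total stranded power: {total_stranded:.1f} kW")
```

Power recovered this way can be returned to the provisioning pool, deferring the cost of new capacity.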


Use what you have first
Look at your applications, risk, network ports and resiliency. Many data centres move to new locations, move to clouds or build new in the belief that they are out of capacity, when many are not even close. A typical server is deployed with two network cards, two power supplies and dual storage connections; which of those servers really needs two of everything? Plenty of applications don’t require that level of uptime once the risk of a short period of unavailability is weighed. Some genuinely need the redundancy, while others provide software redundancy and failover, which may remove the need for dual power.


Review power consumption
Look at your power consumption - and this does not mean PDU output. You have to compare consumption to supply to find out how efficiently the servers are running and when they consume power. Intelligent PDUs can provide commissioning and power usage statistics over time. Some servers may only need to run at certain times of the month rather than continuously: virtualise them, or simply power them off during periods of non-use. If you are virtualising, don’t just look at CPU utilisation; look at power consumption as a ratio of CPU utilisation. Retire or consolidate the energy hogs where possible. It may mean shuffling some resources, but it can lead to significant savings.
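A minimal sketch of that ratio check follows. The server names, wattages and utilisation figures are hypothetical; in practice the watts would come from intelligent PDUs and the utilisation from your monitoring system.

```python
# Compare servers by how much power they burn per unit of useful work.
# All readings below are invented for illustration.

servers = [
    # (name, average watts drawn, average CPU utilisation 0..1)
    ("web-01",   180, 0.55),
    ("db-02",    420, 0.60),
    ("batch-07", 310, 0.04),   # busy a few days a month, idle otherwise
    ("legacy-3", 250, 0.02),
]

print(f"{'server':10} {'watts':>6} {'cpu':>5} {'W per % CPU':>12}")
for name, watts, cpu in sorted(servers,
                               key=lambda s: s[1] / max(s[2], 0.01),
                               reverse=True):
    ratio = watts / (cpu * 100)
    flag = "  <- virtualise or power off?" if cpu < 0.10 else ""
    print(f"{name:10} {watts:6} {cpu:5.0%} {ratio:12.1f}{flag}")
```

The worst offenders, the mostly idle boxes drawing hundreds of watts, surface at the top of the list as the first candidates for consolidation or scheduled shutdown.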


Be cool, not cold
Find out the maximum operating temperature supported by your active electronics manufacturers; most will support higher temperatures than were previously recommended in the data centre, and industry guidance such as the ASHRAE TC 9.9 thermal guidelines now permits warmer inlet temperatures than older practice assumed. Have a cooling assessment performed to determine whether cooling expectations are being met and whether they can be optimised.


Air control
Control of airflow in the room is critical. This includes blanking panels, sealed floor penetrations and control of how air is distributed through the room. If you use a hot/cold aisle arrangement, cabinets should be in a continuous row without spaces between adjacent cabinets, and you should avoid mixing hot and cold air at the air intake of your equipment. For higher densities, hot or cold aisle containment systems may be advantageous. Chimneys also help by channelling hot exhaust air for optimal intake by CRAC units.


Select cabinets designed with ample cable management, such as vertical zero-U spaces, which improve overall airflow by moving cabling away from equipment cooling fans. High-flow front and rear doors will facilitate the airflow needed for proper hot aisle/cold aisle circulation. Targeted cooling with cabinet thermal management can be a highly efficient approach for higher density areas. Rear door heat exchangers, for example, can reduce capital costs by adding cooling only when and where high density heat loads exist, and cooling doors and other close-coupled or spot cooling can decrease both CAPEX and OPEX in the space.


Route cabling with care
If properly designed, under floor cabling won’t have an adverse effect on your cooling. But if it is allowed to get unruly over time, not only will it hurt airflow, you could also see performance issues, particularly in UTP systems if cables are crushed or bent and the twists in the pairs are compromised. Overhead systems can suffer the same problems if additional layers of tray are required and end up over the rear of cabinets, effectively putting a ceiling over the hot air and preventing it from being drawn out of the room by CRAC units. Challenge your installer to plan your cable routing with care, both inter- and intra-cabinet; proper routing can have a significant impact on airflow and cooling efficiency.


Declutter and decommission
A significant number of older data centres have suffered from ill-managed MACs (moves, adds and changes) over the years, leaving abandoned cabling channels behind. This unused cabling often creates air dams under the floor, or safety issues overhead. Remove old cables that are no longer required in cabinets and pathways.


Intelligent infrastructure management systems can provide an auditing advantage by allowing detailed monitoring and real-time documentation and tracking of MACs. By keeping a consistent, up-to-date record of physical layer connections, channels can be dynamically managed to ensure full utilisation of switch ports, decreasing the power needed for electronics and keeping unused ports to a minimum.
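The port-utilisation side of that audit reduces to a simple report, sketched below with invented switch names, port counts and patch records; an intelligent infrastructure management system maintains records like these automatically and in real time.

```python
# A minimal port-utilisation report of the kind an intelligent
# infrastructure management system automates. All names and counts
# here are hypothetical.

switch_ports = {"sw-core-1": 48, "sw-core-2": 48, "sw-edge-1": 24}
patched = {"sw-core-1": 45, "sw-core-2": 17, "sw-edge-1": 21}

for switch, total in sorted(switch_ports.items()):
    used = patched.get(switch, 0)
    pct = used / total
    note = "  <- candidate for consolidation" if pct < 0.5 else ""
    print(f"{switch}: {used}/{total} ports patched ({pct:.0%}){note}")
```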


Lean and green
Greening the data centre begins with making sound choices from a holistic perspective at the outset, taking cabling infrastructure, power and cooling into account when designing and specifying. In existing data centres, affordable steps can be taken individually: monitoring and managing power with intelligent power distribution, decommissioning unused servers that still consume power, improving cable management and removing abandoned cabling that blocks critical airflow. In any data centre, a programme of continuous improvement should be created to achieve long term cost and green benefits; even the smallest steps can yield impressive reductions, and the impact over time can be significant.