It is a common misconception that data centers are responsible for consuming vast amounts of the world’s energy reserves. The truth is that data centers are leading the charge to become carbon free and are supporting members of their supply chains in achieving the same.
Even in the face of rapid digital acceleration, the vast proliferation of smart devices and an upward surge in demand for data, the data center sector remains a force for positive change on climate action and is well on course to fulfil its ‘green evolution’ masterplan.
To be more specific, data centers are estimated to consume between one and two percent of the world’s electricity according to the United States Data Center Energy Usage Report (1). A recent study confirmed that, while data centers’ computing output jumped six-fold from 2010 to 2018, their energy consumption rose only six percent (2).
To better envisage just how energy efficient data processing has become, consider this: if the airline industry were able to demonstrate the same level of efficiency, a typical 747 passenger plane would be able to fly from New York to London on just 2.8 liters of fuel, in around eight minutes.
How has the data center sector reduced total energy consumption?
Using a range of safe, smart and sustainable solutions, ABB is helping its data center customers to reduce CO2 emissions by at least 100 megatons by 2030. That is equivalent to the annual emissions of 30 million combustion engine cars.
Solutions range from more energy-efficient power system innovations to large-scale battery energy storage systems, which ensure reliable grid connectivity during prolonged periods of power loss.
These are obviously big budget changes, with impressive yields, but there is much more that can be done to harness the power of digitalization on a smaller scale.
Making every watt count
The demand for data from society and industry shows no sign of slowing, and it is the job of the data center to meet this increased demand without consuming significantly more energy. Unlike many industries, which wait for regulation to force change, the desire to offer a more environmentally conscious data center comes from within the industry, with many big players, and smaller facilities too, taking an “every watt counts” approach to operational efficiency.
By digitalizing data center operations, data center managers can react to increased demand without incurring significant additional emissions. Running data centers at higher temperatures, switching to variable frequency drives instead of dampers to control fan loads, adopting the improved efficiency of modern UPS systems and using virtualization to reduce the number of underutilized servers are all strong approaches to improving data center operational efficiency.
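One common way to quantify this “every watt counts” mindset is Power Usage Effectiveness (PUE), the ratio of total facility power to the power delivered to IT equipment. PUE is a standard industry metric, though the figures below are illustrative rather than taken from this article; a minimal sketch:

```python
# Power Usage Effectiveness: total facility power divided by IT equipment
# power. 1.0 is the theoretical ideal (every watt reaches the IT load).
# The kW figures below are illustrative, not from the article.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Ratio of total facility power to IT power (lower is better)."""
    return total_facility_kw / it_equipment_kw

# Example: 1,300 kW total draw, of which 1,000 kW reaches the IT equipment,
# so 300 kW goes to cooling, power conversion and other overheads.
print(round(pue(1300.0, 1000.0), 2))  # 1.3
```

Tracking PUE over time shows whether efficiency measures such as those listed above are actually reducing the non-IT overhead.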
To understand this further, let us explore some key sustainable growth tactics for delivering power and cost savings for green data centers:
Digitalization of data centers
One of the most recent developments has been the implementation of digitally enabled electrical infrastructure. Data center operators can take advantage of techniques that make their equipment more visible, efficient and safer. One development has been the use of sensors instead of traditional instrument transformers; these communicate digitally via fiber-optic cables, reducing the total number of cables by up to 90 percent compared with traditional analog designs, and their low-energy circuits also increase safety.
The resultant digital switchgear can then be manufactured, commissioned and repaired much more easily, thanks both to the reduced number of cables and to the intelligent nature of the connections. Other innovations allow circuit protection devices to be configured wirelessly, and even to change their settings when alternate power sources are connected. Visibility into electrical consumption is much easier with digital signals, and this “democratization” of the data stream enables analytics. From this, insights can be gained both to increase efficiency and to tailor consumption to specific business goals.
Minimizing idle IT equipment
There are several ways data centers can minimize idle IT equipment. One popular course of action is distributed computing, which links computers together as if they were a single machine. Essentially, by scaling-up the number of data centers that work together, operators can increase their processing power, thereby reducing or eliminating the need for separate facilities for specific applications.
Virtualization of servers and storage
Undergoing a program of virtualization can significantly improve the utilization of hardware, enabling a reduction in the number of power-consuming servers and storage devices. In fact, it can improve server utilization by around 40 percentage points, increasing it from an average of 10 to 20 percent to at least 50 or 60 percent.
A server cannot tell the difference between physical storage and virtual storage, so it directs information to virtualized areas in the same way. In other words, this process allows for more information storage, but without the need for physical, energy consuming equipment. More storage space means a more efficient server, which saves money and reduces the need for further physical server equipment.
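The utilization gains above translate directly into fewer physical machines. The following sketch uses illustrative workload figures (none taken from the article) to show how the required server count shrinks as average utilization rises from the 10-20 percent range to 50-60 percent:

```python
import math

# Illustrative consolidation arithmetic: how many physical servers are
# needed to host a fixed workload at a given average utilization.
# Workload and capacity units below are arbitrary assumptions.

def servers_needed(workload_units: float, capacity_per_server: float,
                   target_utilization: float) -> int:
    """Physical servers required to host a workload at a target utilization."""
    return math.ceil(workload_units / (capacity_per_server * target_utilization))

workload = 100.0   # total compute demand, arbitrary units
capacity = 10.0    # units one server delivers at 100% utilization

before = servers_needed(workload, capacity, 0.15)  # ~15% average utilization
after = servers_needed(workload, capacity, 0.55)   # ~55% after virtualization

print(before, after)  # 67 19
```

The same workload is served by well under a third of the hardware, with a corresponding drop in power draw.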
Consolidating servers, storage, and data centers
Blade servers can help drive consolidation as they provide more processing output per unit of power consumed. Consolidating storage provides another opportunity, which improves memory utilization while reducing power consumption. Some consolidation methods can use up to 90% less power once fully operational (3).
Big savings are also coming from moving to solid-state drives (SSDs) from traditional hard disk drives (HDDs). While somewhat more expensive, they are much smaller and more energy efficient, and the switch can be made during an IT “refresh” cycle every three to five years.
Managing CPU power usage
More than 50 percent of the power required to run a server is used by its central processing unit (CPU). Most CPUs have power-management features that optimize power consumption by dynamically switching among multiple performance states based on utilization.
By dynamically ratcheting down processor voltage and frequency outside of peak performance tasks, the CPU can minimize energy waste.
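A standard first-order model of dynamic CPU power, P ≈ C·V²·f, shows why scaling voltage and frequency down together saves energy superlinearly. The capacitance, voltage and frequency values below are illustrative assumptions, not figures from the article:

```python
# First-order dynamic CPU power model: P = C * V^2 * f, where C is the
# effective switched capacitance, V the supply voltage and f the clock
# frequency. Because voltage enters squared, lowering V and f together
# cuts power by more than the frequency reduction alone would suggest.
# All numbers below are illustrative assumptions.

def dynamic_power(capacitance_f: float, voltage_v: float,
                  frequency_hz: float) -> float:
    """Dynamic switching power in watts under the C * V^2 * f model."""
    return capacitance_f * voltage_v ** 2 * frequency_hz

C = 1e-9  # effective switched capacitance in farads (illustrative)

peak = dynamic_power(C, 1.2, 3.0e9)  # full-performance state
low = dynamic_power(C, 0.9, 1.5e9)   # ratcheted-down state

print(f"{low / peak:.2%} of peak dynamic power")
```

Halving the frequency while dropping the voltage from 1.2 V to 0.9 V leaves the CPU drawing roughly 28 percent of its peak dynamic power in this model.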
Distribution of power at different voltages
Virtually all IT equipment is designed to work with input power voltages ranging from 100V to 240V AC, in accordance with global standards. As a general rule, the higher the voltage, the more efficiently the unit operates.
That said, by operating a UPS at 240/415V three-phase four wire output power, a server can be fed directly, and an incremental two percent reduction in facility energy can be achieved.
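The saving comes from removing a conversion stage: losses across chained power-conversion stages multiply. The stage efficiencies below are illustrative assumptions, but the arithmetic shows how eliminating a roughly 98-percent-efficient step-down transformer recovers about two percentage points, in line with the figure above:

```python
# Efficiencies of power-conversion stages in series multiply, so every
# stage removed recovers roughly that stage's loss. The 96% UPS and 98%
# PDU transformer efficiencies below are illustrative assumptions.

def chain_efficiency(*stage_efficiencies: float) -> float:
    """Overall efficiency of conversion stages connected in series."""
    total = 1.0
    for eff in stage_efficiencies:
        total *= eff
    return total

with_pdu = chain_efficiency(0.96, 0.98)  # UPS, then step-down PDU transformer
direct = chain_efficiency(0.96)          # 240/415V UPS output feeds servers directly

print(f"{(direct - with_pdu) * 100:.1f} percentage points recovered")
```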
Adopting best cooling practices
Traditional air-cooling systems have proven very effective at maintaining a safe, controlled environment at rack densities from two to three kW per rack all the way up to 25 kW per rack. But operators are now aspiring to create environments that can support densities in excess of 30-50 kW as demand for AI and machine learning increases, and at these levels air-cooling technologies are no longer effective.
That should not be seen as a barrier, though, as alternative cooling systems, such as rear-door heat exchangers, provide a suitable solution.
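Some back-of-envelope heat-transfer arithmetic illustrates why air cooling runs out of headroom at high rack densities. The air properties and the 10 K temperature rise below are illustrative assumptions, not figures from the article:

```python
# Volumetric airflow needed to remove a rack's heat at a given air
# temperature rise, from P = rho * cp * flow * dT. The air density,
# specific heat and 10 K delta-T below are illustrative assumptions.

AIR_DENSITY = 1.2           # kg/m^3, roughly room conditions
AIR_SPECIFIC_HEAT = 1005.0  # J/(kg*K)

def airflow_m3_per_s(rack_power_w: float, delta_t_k: float) -> float:
    """Volumetric airflow (m^3/s) required to carry away rack_power_w."""
    return rack_power_w / (AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_k)

for kw in (3, 25, 40):
    flow = airflow_m3_per_s(kw * 1000, delta_t_k=10.0)
    print(f"{kw} kW rack: {flow:.2f} m^3/s of air")
```

Required airflow scales linearly with rack power, so a 40 kW rack needs over ten times the air of a 3 kW rack, which is why liquid-based approaches such as rear-door heat exchangers take over at these densities.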
Plugging into the smart grid
Smart grids enable two-way energy and information flows to create an automated and distributed power delivery network. Data center operators can not only draw clean power from the grid, but can also install renewable power generation within their facilities and become occasional power suppliers back to the grid.
(1) United States Data Center Energy Usage Report, LBL Sustainable Energy Systems
(2) “Cloud Computing Is Not the Energy Hog That Had Been Feared,” The New York Times
(3) “Servers: Database consolidation on Dell PowerEdge R910 servers”