The TCO Trap

By Zahl Limbuwala, CEO, Romonet.

Monday, 13th May 2013

As one of a business’s major investments, any datacentre must be able to justify the value it brings to the organisation. Whether building a datacentre from scratch or investing in and optimising an existing facility, businesses must be able to calculate factors such as the Total Cost of Ownership (TCO) in order to ensure that their decisions make sound financial sense. However, there are some common pitfalls that make these calculations, and the comparisons based on them, harder than they need to be.


Measurement vs. Prediction
The first trap many businesses fall into is assuming that historical measurements alone will allow them to calculate TCO and manage their costs, using past figures for PUE, equipment refresh rates and other factors. However, such information is of little value in a vacuum; without a target to compare it against, businesses will have little idea of how their datacentre is performing or whether they need to adapt their operations. As a comparison, imagine driving a car without knowing the local speed limit. You can read your speed from the speedometer and even use other information such as fuel efficiency, tyre wear or engine temperature to get a better idea of performance. Yet without knowing the speed limit, which is set by an entirely different set of considerations, you would still have no idea whether your driving was likely to result in a fine or worse.


Similarly, businesses should first predict their TCO using the appropriate tools, calculations and information, such as the expected energy use of each piece of equipment in the datacentre. They can then evaluate their historical measurements against these predictions to see whether they are meeting expectations and act accordingly. However, when making these calculations businesses can fall victim to the second trap.
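
To make the comparison concrete, the sketch below shows one highly simplified way of predicting annual energy cost per asset from an expected IT load, an assumed PUE and an assumed flat tariff, and then comparing metered figures against those predictions. Every name and figure in it is hypothetical; it is a sketch of the idea, not a description of any particular modelling tool or methodology.

```python
# Illustrative sketch only: a simplified predicted-vs-measured comparison of
# annual energy cost for a handful of datacentre assets. All asset names,
# loads, the tariff and the PUE figure are hypothetical assumptions.

HOURS_PER_YEAR = 8760
TARIFF_GBP_PER_KWH = 0.12   # assumed flat electricity tariff
ASSUMED_PUE = 1.6           # assumed facility Power Usage Effectiveness

# Expected average IT load per asset, in kW (hypothetical figures)
predicted_it_load_kw = {
    "rack_A_servers": 4.5,
    "rack_B_servers": 3.8,
    "storage_array": 2.1,
}

# Measured average IT load per asset from metering (hypothetical figures)
measured_it_load_kw = {
    "rack_A_servers": 5.2,
    "rack_B_servers": 3.6,
    "storage_array": 2.4,
}

def annual_energy_cost(it_load_kw: float) -> float:
    """Annual cost: IT energy scaled by PUE, priced at the flat tariff."""
    annual_kwh = it_load_kw * ASSUMED_PUE * HOURS_PER_YEAR
    return annual_kwh * TARIFF_GBP_PER_KWH

for asset, predicted_kw in predicted_it_load_kw.items():
    predicted = annual_energy_cost(predicted_kw)
    measured = annual_energy_cost(measured_it_load_kw[asset])
    variance = (measured - predicted) / predicted * 100
    print(f"{asset}: predicted £{predicted:,.0f}, "
          f"measured £{measured:,.0f} ({variance:+.1f}%)")
```

Even in this toy form, the point is that the measured numbers only become meaningful once there is a prediction to compare them against.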


Hard Calculations
The second trap is using tools that cannot cope with the complexity involved in these calculations. While nobody would use a notepad, pen and pocket calculator to predict their TCO, a lack of specialist tools means that businesses still fall back on tried and tested methods such as spreadsheets. Yet while spreadsheets are a fantastic multi-purpose tool that the business world still relies upon, they are not adequate for predicting something as complex as datacentre TCO. This was borne out by research from the University of Hawaii, which found that 88% of spreadsheets, and at least 1% of all formula cells, contained errors. While 1% may seem a small amount, the effects of such errors compound rapidly as calculations build on one another. A startling example is the discovery that a Harvard economics paper, used by many as justification for governments’ economic strategies, had omitted certain cells from its calculations. Together with other issues, this meant that the central conclusion of the study was at best shaky, and at worst reversed.


The Cost of Complexity
While datacentre managers’ spreadsheets are unlikely to damage national economies, there are still issues of complexity, and often compromise, that can greatly reduce the accuracy of TCO calculations. To begin with, the data that need to be brought together exist in various domains and are recorded in differing units and dimensions. Kilowatts, kilowatt-hours, Power Usage Effectiveness, square feet or metres, BTU, degrees Celsius or Fahrenheit, Capex, Opex, time and date are just some of the units and measurements that need to be reconciled in order to understand the true cost of a datacentre. In such multi-dimensional calculations, even a slight error can greatly skew the TCO estimate, no matter how much time is spent on the underlying equations.
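
As a rough illustration of that reconciliation work, the sketch below normalises a few such measurements (BTU per hour, square feet, degrees Fahrenheit) into the units a model might use internally (kW, square metres, degrees Celsius). The conversion factors are standard; the scenario figures are hypothetical.

```python
# Illustrative sketch only: normalising mixed units before costs can be
# combined. Conversion factors are standard; the example figures are made up.

BTU_PER_KWH = 3412.14    # 1 kWh = 3,412.14 BTU
SQFT_PER_SQM = 10.7639   # 1 square metre = 10.7639 square feet

def btu_to_kwh(btu: float) -> float:
    """Convert BTU to kWh (BTU/h input therefore yields kW)."""
    return btu / BTU_PER_KWH

def sqft_to_sqm(sqft: float) -> float:
    """Convert square feet to square metres."""
    return sqft / SQFT_PER_SQM

def fahrenheit_to_celsius(f: float) -> float:
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

# Example: a chiller quoted in BTU/h, a hall quoted in square feet and a
# supply-air set point quoted in Fahrenheit, normalised for the model.
chiller_kw = btu_to_kwh(150_000)         # 150,000 BTU/h -> ~44.0 kW
hall_sqm = sqft_to_sqm(12_000)           # 12,000 ft^2   -> ~1,115 m^2
set_point_c = fahrenheit_to_celsius(75)  # 75 °F         -> ~23.9 °C

print(f"Chiller: {chiller_kw:.1f} kW, hall: {hall_sqm:.0f} m^2, "
      f"set point: {set_point_c:.1f} °C")
```

Each conversion is trivial on its own; the risk lies in the sheer number of them buried in formula cells, any one of which can quietly go wrong.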


When faced with such a complex task, it is natural that many will attempt to simplify it. The most common approach is to approximate those areas that are too multifaceted to analyse or model in detail, even with spreadsheets. This in turn makes the TCO calculation less accurate, simply because the final figure is based on incomplete inputs. Since those creating and maintaining these spreadsheets are aware of this, they build error margins into their calculations, essentially settling for a final figure they deem to be “close enough”. However, a single TCO calculation can combine calculations and inputs from a variety of teams and sources, each of which may have taken the same “close enough” approach. As a result, the final TCO figure rests on a series of estimates spread across a number of spreadsheets, each with a high chance of containing errors in what may be critical cells. Once data has been passed between these spreadsheets, any system-level behaviours or anomalies become near-impossible to predict.
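
A simple worked example shows why this matters. If, say, six independently produced inputs are each allowed a hypothetical ±5% “close enough” margin before being chained into one figure, the final number can drift a long way from reality. The sketch below estimates both the worst-case and the typical drift; the margin, the number of chained inputs and the cost figure are all assumptions chosen purely for illustration.

```python
# Illustrative sketch only: how several independent "close enough" estimates
# compound once they feed into a single figure. The ±5% margin, the six
# chained inputs and the cost figure are hypothetical.

import random

TRUE_ANNUAL_COST = 10_000_000   # hypothetical "true" cost component, in £
MARGIN = 0.05                   # each team settles for ±5% accuracy
CHAINED_INPUTS = 6              # estimates passed spreadsheet to spreadsheet
TRIALS = 100_000

# Worst case: every estimate errs in the same direction by the full margin.
worst_case_drift = (1 + MARGIN) ** CHAINED_INPUTS - 1

# Typical case: each estimate errs independently within the margin.
errors = []
for _ in range(TRIALS):
    estimate = TRUE_ANNUAL_COST
    for _ in range(CHAINED_INPUTS):
        estimate *= 1 + random.uniform(-MARGIN, MARGIN)
    errors.append(abs(estimate - TRUE_ANNUAL_COST) / TRUE_ANNUAL_COST)

print(f"Worst-case drift: {worst_case_drift:.1%}")
print(f"Typical (mean) drift across trials: {sum(errors) / len(errors):.1%}")
```

Even in the benign case where errors partly cancel out, the final figure still carries a drift well beyond any single team’s “close enough” margin, and the worst case compounds to roughly a third of the true value.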


Essentially, businesses need to be sure that they are predicting, as well as measuring, their costs. When doing so, they must make certain that a combination of complexity, human nature and blunt toolsets is not dragging their calculations away from accuracy and into the realm of “close enough”. Otherwise, any datacentre can quickly turn from a major source of investment return into a financial black hole.