The Visibility Gap That Could Undermine Data Centre Growth

By Rami Jebara, CTO and Co-Founder at Hyperview.

Tuesday, 2nd September 2025

The data centre industry is bracing for an energy reckoning. By 2030, electricity demand from global data centres is expected to grow by more than 165%, according to Goldman Sachs. In the US alone, the Electric Power Research Institute projects that data centres could consume nearly 9% of national electricity generation within five years, compared to just 4% in 2023.

Artificial intelligence is now the primary force reshaping infrastructure demands. Training and inference workloads are driving increases in compute density, straining cooling systems, and pushing power consumption to levels few data centres were designed to handle. Hyperscalers are re-engineering entire facilities to keep up, while everyone else is racing to avoid being left behind.

As the industry focuses on expansion, a deeper and less visible problem is emerging. Operators lack real-time insight into what is happening inside these environments, and that blind spot is growing faster than capacity itself.

Legacy systems weren’t built for AI-scale complexity

The behaviour of modern data centres has diverged significantly from the environments that legacy Data Centre Infrastructure Management (DCIM) platforms were originally designed to support. AI-driven workloads and modern applications are increasingly decentralised, operate at much higher intensity, and place unpredictable demands on power and cooling infrastructure. As edge and cloud deployments expand, the scope of operational oversight continues to shift and fragment.

Despite this evolution, many operators continue to rely on on-premises DCIM systems developed for a more static, centrally managed infrastructure model. These platforms often struggle to deliver the speed, interoperability, and forecasting capabilities required to manage today’s hybrid estates effectively.

Their limitations in providing comprehensive visibility, adapting to dynamic workload patterns, and delivering timely operational insight are increasingly at odds with the performance, sustainability, and compliance standards data centres must meet. What may have once been considered a technical limitation now represents a far-reaching operational risk.

The cost of operational guesswork is rising

With each additional kilowatt of AI compute, the pressure on infrastructure intensifies. Although rack densities are increasing, the Uptime Institute’s 2024 survey reports that the average remains below 8 kilowatts per rack, and cabinets exceeding 30 kilowatts are still uncommon across most facilities. Liquid and immersion cooling are becoming essential for higher-density deployments, but they introduce new environmental and operational complexities. In these conditions, managing infrastructure through static spreadsheets or delayed dashboards is no longer sustainable.

Operators need the ability to monitor, model, and predict performance in real time. This includes clear visibility into where power is being wasted, where cooling systems are overcompensating, and where capacity is underutilised. Without this insight, even well-resourced teams are forced to make costly decisions based on incomplete data.
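As a concrete illustration of one of these signals, the sketch below flags racks whose measured draw sits persistently below their provisioned power, a simple indicator of stranded capacity. The rack names, threshold, and readings are illustrative assumptions, not output from any particular DCIM product.

```python
# Minimal sketch: flagging underutilised racks from power telemetry.
# Rack names, the 40% floor, and the sample readings are illustrative
# assumptions, not values from any specific DCIM platform.

from dataclasses import dataclass


@dataclass
class RackReading:
    name: str
    provisioned_kw: float  # power budgeted to the rack
    measured_kw: float     # latest metered draw


def stranded_capacity(racks: list[RackReading], floor: float = 0.4) -> list[str]:
    """Return racks drawing less than `floor` of their provisioned power.

    Persistent low utilisation suggests capacity that is reserved on
    paper but never used, a common source of overbuilding.
    """
    flagged = []
    for rack in racks:
        utilisation = rack.measured_kw / rack.provisioned_kw
        if utilisation < floor:
            flagged.append(f"{rack.name}: {utilisation:.0%} of {rack.provisioned_kw} kW")
    return flagged


if __name__ == "__main__":
    sample = [
        RackReading("A01", provisioned_kw=10.0, measured_kw=2.5),
        RackReading("A02", provisioned_kw=10.0, measured_kw=7.8),
    ]
    for line in stranded_capacity(sample):
        print("Underutilised ->", line)
```

In practice a check like this would run continuously against live telemetry rather than a static sample, but the underlying comparison between provisioned and measured power is the same.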

Overbuilding or overcooling as a safeguard remains a common practice. However, this approach drives up capital expenditure, increases energy consumption, raises carbon emissions, and undermines opportunities to optimise.

Sustainability and compliance demand greater visibility

A lack of precision in infrastructure monitoring is making it harder for organisations to meet increasingly strict regulatory and sustainability requirements. Across Europe and North America, data centres must now comply with detailed reporting mandates covering energy use, emissions, and resource efficiency. ESG transparency is recognised as a core performance measure, not a secondary concern.

Compliance depends on accurate measurement. Without systems that benchmark performance and generate auditable records, data centre teams struggle to keep pace with rising expectations.
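To make the measurement point concrete, the sketch below computes Power Usage Effectiveness (PUE), the ratio of total facility energy to IT equipment energy standardised in ISO/IEC 30134-2, and serialises the inputs alongside the result so the reported figure can be traced. The field names and JSON record format are assumptions for illustration, not a prescribed reporting schema.

```python
# Minimal sketch: computing PUE from meter readings and keeping an
# auditable record. Field names and the JSON format are illustrative
# assumptions; real reporting pipelines follow schemes such as
# ISO/IEC 30134-2 and their operators' audit requirements.

import json
from datetime import datetime, timezone


def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: facility energy divided by IT energy."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kwh / it_equipment_kwh


def audit_record(total_kwh: float, it_kwh: float) -> str:
    """Serialise the inputs alongside the result so the figure is traceable."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "total_facility_kwh": total_kwh,
        "it_equipment_kwh": it_kwh,
        "pue": round(pue(total_kwh, it_kwh), 3),
    })


print(audit_record(total_kwh=1310.0, it_kwh=1000.0))  # pue = 1.31
```

The point is not the arithmetic, which is trivial, but the provenance: an auditable figure records what was measured, when, and how the result was derived.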

In this context, continuous visibility into operational environments has become a valuable asset. With access to current, actionable insights, teams can report more accurately, strengthen internal controls, and improve accountability across the organisation.

AI at the edge demands smarter operations at the core

AI-driven decentralisation is leading operators to manage an increasing number of smaller, distributed facilities, each facing its own cooling, power, and network challenges. Attempting to oversee these environments individually is no longer practical, which is why centralised, cloud-based DCIM platforms with built-in intelligence are quickly becoming standard practice.

Today’s infrastructure management tools must go beyond basic asset tracking. They need to anticipate problems, adapt to changing conditions, and support long-term operational resilience under growing pressure.

Capabilities such as anomaly detection, risk modelling, live heat mapping, and capacity forecasting enable operators to transform infrastructure data into meaningful, real-time insights. With broader visibility and stronger control, teams are better positioned to make faster, more informed decisions across complex environments.
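As a simplified example of the first of these capabilities, the sketch below flags power readings that deviate sharply from a rolling baseline. The window size, threshold, and sample data are illustrative assumptions; production anomaly detection typically uses far richer models than a rolling z-score.

```python
# Minimal sketch: anomaly detection on a power-draw series using a
# rolling mean and standard deviation. The window, threshold, and
# sample readings are illustrative assumptions.

import statistics


def anomalies(series: list[float], window: int = 12, threshold: float = 3.0) -> list[int]:
    """Return indices whose reading deviates more than `threshold`
    standard deviations from the preceding `window` readings."""
    flagged = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.stdev(history)
        if stdev > 0 and abs(series[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged


# Eleven steady readings around 5.1 kW, then a sudden spike.
readings = [5.1, 5.0, 5.2, 5.1, 5.0, 5.3, 5.1, 5.2, 5.0, 5.1, 5.2, 5.1, 9.8]
print(anomalies(readings))  # [12]: the 9.8 kW spike stands out
```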

A new baseline for modernisation

The industry’s default response to rising pressure has often been physical expansion. However, in an environment shaped by energy constraints, supply chain challenges, and increasing scrutiny around sustainability, this approach is becoming less viable.

Operators positioned to thrive in this changing landscape are those who can maintain visibility, act with speed, and make efficient scaling decisions grounded in real operational insight. Achieving this begins with modernising the less visible layers of infrastructure management.

AI continues to drive significant innovation across the sector and is also revealing critical gaps in how infrastructure is monitored, managed, and planned. The challenge is not a lack of investment in new capacity, but a lack of control over how existing resources are operated.

Meeting the demands of AI-driven computing will require more than additional power. It will depend on greater precision and deeper insight across every layer of infrastructure.