Data Centre Location Strategy in the Age of AI

By Aashna Puri, Global Strategy and Expansion Director, CyrusOne.

Tuesday, 19th August 2025 | Posted by Phil Alsop

Investment headlines still prominently feature traditional data centre hubs such as Northern Virginia, Frankfurt, and Singapore. Yet beneath these familiar names, data centre developers are navigating a rapidly evolving landscape driven by unprecedented shifts in technology, capital, and regional constraints. The explosive growth of artificial intelligence (AI), combined with enormous capital requirements and complex physical limitations of power and land, is redefining how and where data centres are built, ushering in a dynamic era of growth where uncertainty is not just a challenge, but a fundamental part of strategic decision-making. Location strategy now relies on three forces working together: the workload in scope, the capital required to deliver at speed, and the regional constraints that ultimately decide what is possible.

AI and the Shift from Predictability to Rapid Scale

Traditional data centre development relied on steady, predictable growth of enterprise and cloud workloads. Facilities could be planned with relative confidence, expanding incrementally to match demand. That model alone, however, no longer works. While enterprise and cloud adoption continue to grow, AI has added a new layer of location criteria driven by the very different needs of training and inference workloads.

Training, used for building large language models and generative AI, requires massive, centralised campuses with extremely high-density compute. These facilities are power-intensive, thermally complex, and increasingly shaped by rapid hardware innovation. AWS, for example, recently reported that its custom Trainium2 chips now offer 30-40% better price-performance than GPUs. Shifts like this can alter power, cooling, and rack density requirements mid-cycle. Because training is far less sensitive to latency, it gravitates towards lower-cost, power-abundant markets where hundreds of megawatts can be assembled at a low total cost of ownership.

On the other hand, inference workloads support real-time user interactions and require low-latency delivery close to end users. Here, location strategy revolves around major population centres with strong connectivity through dense fibre networks, cloud on-ramps, and local internet exchanges, while still balancing power availability. Regulations and data residency requirements can also dictate where these sites need to be.

In practice, developers now serve three distinct, sometimes overlapping demands: enterprise workloads in availability zones, large, centralised campuses for training, and distributed infrastructure for inference. Building quickly remains essential, but success depends on being able to adjust capacity and design as workloads change.
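The three overlapping demands described above ultimately reduce to a weighting exercise over the same site attributes: power, latency, land, and permitting. A minimal sketch of that trade-off in Python (every weight, metric name, and candidate site here is a hypothetical illustration, not data from this article):

```python
# Toy multi-criteria site scoring. All weights and metric values are
# illustrative assumptions; a training-heavy buyer would weight power
# most, an inference-heavy buyer would weight latency most.
WEIGHTS = {
    "power_availability": 0.40,
    "latency_to_users": 0.35,
    "land_and_permits": 0.25,
}

def site_score(metrics: dict[str, float], weights: dict[str, float] = WEIGHTS) -> float:
    """Weighted sum of normalised (0-1) site metrics; higher is better."""
    return sum(weights[k] * metrics[k] for k in weights)

# Two hypothetical candidates: a power-rich secondary market vs a dense metro.
candidates = {
    "secondary_market": {"power_availability": 0.9, "latency_to_users": 0.4, "land_and_permits": 0.8},
    "dense_metro": {"power_availability": 0.4, "latency_to_users": 0.9, "land_and_permits": 0.3},
}
for name, metrics in candidates.items():
    print(name, round(site_score(metrics), 3))
```

Under these power-leaning weights the secondary market wins; shift the weight towards latency, as an inference deployment would, and the dense metro comes out ahead. The point is the one the article makes: the "best" location is a function of the workload in scope, not a fixed ranking.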

Capital: Fuelling the Scale of AI Expansion

To match the scale of AI-driven growth, unprecedented levels of investment are required. Morgan Stanley forecasts that global data centre capital expenditure could reach nearly $3 trillion by 2028, a figure far exceeding any previous infrastructure cycle, including the late-90s telecom boom. While hyperscalers such as AWS, Google, and Microsoft will directly fund much of this, private credit will be critical in bridging an estimated $800 billion gap. Debt markets are also expected to expand sharply: roughly $150 billion in bonds backed by data centres and related hardware could triple the size of today's ABS and CMBS market.

However, this capital comes with expectations. GPUs can lose as much as 30% of their value each year, and with rapid hardware turnover, shifting power needs, and evolving design standards, investors will favour facilities that can be adapted, upgraded, and reconfigured as technology changes. Projects with ready power, land, and permits are best placed to deliver returns quickly. As a result, capital is not only enabling the next wave of growth but also influencing which locations rise to the top of the build queue.
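The depreciation arithmetic behind that investor caution compounds quickly. A minimal sketch using the article's 30% annual figure (the $30,000 unit cost and four-year horizon are illustrative assumptions):

```python
def residual_value(initial_cost: float, annual_loss: float, years: int) -> float:
    """Geometric depreciation: value remaining after `years` at `annual_loss` per year."""
    return initial_cost * (1 - annual_loss) ** years

# A hypothetical $30,000 accelerator losing 30% of its value each year
# (the article's upper-bound figure) retains under a quarter of its
# purchase price by year four.
for year in range(1, 5):
    value = residual_value(30_000, 0.30, year)
    print(f"Year {year}: ${value:,.0f}")
```

At that rate a chip is worth roughly 70%, 49%, 34%, and 24% of its cost over four successive years, which is why facilities that can be re-racked and re-powered for the next hardware generation command a premium over single-design builds.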

Regional Realities: Balancing Land, Power, and Regulation

Beyond technology and capital, there are physical constraints like power availability, and evolving regulatory frameworks that impact where data centres can be built. Each region has distinct strategic challenges and opportunities.

In the United States, where more than half of all new global capacity is expected, demand is being driven by a broad mix of customers and workloads. Hyperscalers are building gigawatt-scale campuses to support AI training at unprecedented scale, while model builders, neocloud platforms, and chipmakers are all competing for the power, land, and infrastructure that can meet their specialised needs. As established markets like Northern Virginia run up against space and power constraints, growth is accelerating in secondary markets such as Reno, Columbus, and the broader Dallas–Fort Worth region. These locations offer the land, power, and permitting speed needed for rapid, large-scale deployment, reinforcing the US as the centre of gravity for global AI infrastructure.

In Europe, the data centre landscape is evolving through strategic layering of workloads. Growth continues steadily in enterprise cloud adoption, with AI workloads increasingly being integrated into existing facilities. This convergence makes the most of infrastructure already in place, leveraging colocation to balance enterprise and AI demands within the same locations. Demand is also being shaped by regulatory compliance and sovereignty requirements, particularly in sectors like government, finance, and healthcare, where data residency and low latency are critical. As a result, primary markets continue to see the bulk of activity, positioning Europe as a leader in sustainable data centre development.

Asia, by contrast, follows a hub-and-spoke model: core hubs such as Singapore, Tokyo, and Seoul handle critical workloads, while constrained land and power push new capacity to nearby spokes like Johor, Batam, Osaka, and Busan. Further, fragmented sovereignty rules are forcing operators to replicate smaller footprints in each market, resulting in networks of medium-scale, low-latency sites rather than single mega-campuses. Overall, strong local policy and capital support help projects move comparatively quickly, making Asia's market more nationally segmented than Europe's and far more geographically dispersed than the gigawatt-scale developments seen in the US.

Future-Proofing Location Strategy: Balancing Speed, Capital, and Change

Ultimately, the interplay between rapidly evolving AI technologies, extraordinary capital requirements, and intricate regional constraints calls for a holistic and adaptable approach. Data centre strategies must evolve from predictable, incremental growth models toward flexible frameworks designed to rapidly respond to technological advancements and shifting market conditions.

In the AI-driven future, success for data centre developers will hinge not merely on anticipating growth but on effectively integrating these interconnected dynamics. It will mean building infrastructure capable of meeting both the centralised, energy-dense requirements of AI training and the distributed, latency-sensitive needs of inference workloads. It will mean selecting strategic locations where capital, regulation, land availability, and power align favourably. Most importantly, it will require embracing uncertainty as an inherent part of strategic decision-making.