Artificial Intelligence (AI) has quickly moved from experimentation to implementation, and it’s bringing a very different set of demands into the data centre. These aren’t the same workloads that data centres were designed around ten years ago - or even five.
Whether it’s model training, inference, or real-time decision-making at the edge, AI’s infrastructure footprint is larger, hotter and more volatile - and it is changing what data centre operators need to prioritise. Cooling systems are being pushed closer to their limits, power delivery must handle unexpected peaks, and commissioning processes have to adapt to completely new thermal and electrical profiles.
As such, facilities teams, consultants and developers are now redesigning data centres - not to meet average load, but to manage unpredictability, density and long-term flexibility.
AI workloads are raising the bar on every system
The hardware that supports AI - particularly graphics processing unit (GPU) servers and AI accelerators - requires far more from its environment. While traditional racks may operate comfortably at 10kW–15kW, AI zones are now routinely designed for 30kW, 50kW, or even 100kW per rack. That level of density places very different expectations on upstream infrastructure: power distribution units (PDUs), switchgear, backup systems and thermal management all need to be scaled accordingly.
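To make that scaling concrete, here is a quick back-of-envelope sketch using the rack densities above and an assumed ten-rack row fed at 400V three-phase, with unity power factor, purely for illustration:

```python
import math

# Rough row-level power comparison (all figures are illustrative assumptions):
# the same ten-rack row at traditional vs AI-class densities, and the feed
# current implied at 400 V three-phase, assuming unity power factor.

RACKS_PER_ROW = 10
VOLTAGE_LL = 400.0   # line-to-line volts, assumed distribution voltage

def row_current_amps(rack_kw: float) -> float:
    """Feed current for a full row at the given per-rack power."""
    row_kw = rack_kw * RACKS_PER_ROW
    return row_kw * 1000.0 / (math.sqrt(3) * VOLTAGE_LL)

for rack_kw in (15, 50, 100):
    print(f"{rack_kw:>3} kW racks -> row load {rack_kw * RACKS_PER_ROW:>5} kW, "
          f"~{row_current_amps(rack_kw):.0f} A feed")
```

A row of 100kW racks draws roughly what six or seven traditional rows did combined, which is why PDUs, switchgear and backup systems have to be re-sized rather than simply reused.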
What’s more, AI workloads don’t run in stable cycles. Inference jobs, for example, might be continuously active across a fleet of systems, while training workloads spike hard and fast, often running at full capacity for hours or days at a time. This volatile profile breaks many of the design assumptions built around traditional enterprise workloads and makes responsiveness more important than raw capacity.
Cooling systems are under scrutiny
Thermal management has always been important, but it’s now one of the defining challenges of AI infrastructure. Conventional air-based cooling systems - still standard in many data centres - struggle to cope with concentrated hotspots produced by dense GPU clusters.
In response, many operators are trialling or deploying liquid cooling. Direct-to-chip cooling is being rolled out in retrofit scenarios, while immersion cooling is gaining ground in highly specialised environments. Both require a step-change in engineering, particularly when it comes to leak detection, heat exchange, fluid compatibility, and integration with existing heating, ventilation, and air conditioning (HVAC) systems.
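One reason liquid copes better with these densities is simply the heat-carrying capacity of the medium. The sketch below compares the mass flow of air and water needed to remove the same heat, using textbook fluid properties and assumed temperature rises; the 50kW rack is just an example from the density range discussed earlier.

```python
# Why liquid helps: the flow needed to move the same heat is far smaller.
# Heat removed = mass_flow * specific_heat * delta_T, rearranged for mass_flow.
# Fluid properties are textbook values; the delta_T figures are assumptions.

RACK_KW = 50.0  # example rack from the densities discussed above

def mass_flow_kg_per_s(power_w: float, cp: float, delta_t: float) -> float:
    """Mass flow of coolant needed to carry power_w at a given temperature rise."""
    return power_w / (cp * delta_t)

power_w = RACK_KW * 1000.0

air_flow = mass_flow_kg_per_s(power_w, cp=1005.0, delta_t=12.0)    # kg/s of air
water_flow = mass_flow_kg_per_s(power_w, cp=4186.0, delta_t=10.0)  # kg/s of water

print(f"Air:   ~{air_flow:.1f} kg/s (~{air_flow / 1.2:.1f} m^3/s of airflow)")
print(f"Water: ~{water_flow:.2f} kg/s (~{water_flow * 60:.0f} litres/min)")
```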
Many facilities are exploring hybrid approaches, including rear-door heat exchangers or aisle-level containment, to buy time and defer costly retrofits. But these are increasingly viewed as temporary solutions. The direction of travel is clear: cooling must evolve in parallel with compute.
Commissioning is becoming more sophisticated
It’s not enough, though, to design for higher density and dynamic load. Facilities must also prove, before go-live, that systems can perform safely and efficiently under those conditions. This is changing how commissioning is carried out.
Simulation and modelling tools are now standard in pre-commissioning phases. Digital twins are being used to visualise airflow, predict thermal hotspots, and test redundancy scenarios. Power simulations are also becoming more granular, modelling not just aggregate draw but transient peaks and fault recovery behaviour.
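A minimal sketch of the kind of granularity involved is shown below, assuming a hypothetical AI hall with a steady inference fleet and a bursty training cluster; the capacity figures and burst pattern are invented for illustration.

```python
import random

# Toy transient-load model for a hypothetical AI hall (all figures illustrative).
FEED_CAPACITY_KW = 4000.0   # assumed utility feed rating
INFERENCE_BASE_KW = 1800.0  # steady inference fleet
TRAINING_PEAK_KW = 1900.0   # training cluster at full tilt

random.seed(7)

def training_profile(minutes: int) -> list[float]:
    """Training runs in long full-power bursts separated by idle gaps."""
    profile, t = [], 0
    while t < minutes:
        burst = random.randint(120, 480)  # hours-long bursts
        idle = random.randint(30, 120)
        profile += [TRAINING_PEAK_KW] * burst + [0.0] * idle
        t += burst + idle
    return profile[:minutes]

minutes = 24 * 60
load = [INFERENCE_BASE_KW + p for p in training_profile(minutes)]
peak = max(load)
print(f"Peak draw: {peak:.0f} kW of {FEED_CAPACITY_KW:.0f} kW "
      f"({100 * peak / FEED_CAPACITY_KW:.0f}% of feed capacity)")
print(f"Minutes above 90% of feed: {sum(l > 0.9 * FEED_CAPACITY_KW for l in load)}")
```

Real pre-commissioning models are far more detailed, but even a toy profile like this makes transient peaks, and how long they persist, visible in a way an average-load figure never would.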
On-site testing is evolving too. Engineers are validating cooling system responsiveness under partial and full AI loads, not just under steady conditions. Redundancy switching, uninterruptible power supply (UPS) engagement, and fluid flow monitoring are all tested under more realistic, and often more stressful, conditions than before.
Commissioning has always been about managing risk, but with AI’s infrastructure footprint changing so fast, it’s now also about being future-ready. Systems need to be tested not only for today’s load, but for the more complex mix of applications and equipment that is likely to follow.
Power is the new constraint
As AI drives up energy demand, power availability has become a critical factor in facility planning. In parts of the UK, securing a new grid connection can take three to five years, with no guarantee of sufficient capacity. That reality is forcing difficult decisions - not just about location, but about how power is managed and supplemented.
Operators are increasingly exploring on-site generation, including natural gas turbines and hydrogen-ready systems. Battery energy storage is also being used to smooth load profiles and provide resilience during transitions. Facilities are being built in phased blocks, with infrastructure ready to scale once additional power becomes available. In some cases, where an eventual large-scale campus is planned, operators are even considering on-site nuclear power in the form of a small modular reactor (SMR).
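A simplified illustration of the load-smoothing role batteries play, assuming hypothetical capacity, rate and threshold values; real battery management systems are considerably more sophisticated.

```python
# Toy peak-shaving logic: discharge the battery when site load exceeds a
# threshold, recharge opportunistically when there is headroom.
# All figures are illustrative assumptions.

GRID_LIMIT_KW = 3000.0   # the most we want the grid connection to see
BATTERY_KWH = 2000.0     # usable storage
MAX_RATE_KW = 1000.0     # charge/discharge power limit

def smooth(load_kw: list[float], step_h: float = 1 / 60) -> list[float]:
    """Return the grid draw after peak shaving, one value per time step."""
    soc = BATTERY_KWH / 2  # start half full
    grid = []
    for load in load_kw:
        if load > GRID_LIMIT_KW and soc > 0:
            discharge = min(load - GRID_LIMIT_KW, MAX_RATE_KW, soc / step_h)
            soc -= discharge * step_h
            grid.append(load - discharge)
        elif load < GRID_LIMIT_KW and soc < BATTERY_KWH:
            charge = min(GRID_LIMIT_KW - load, MAX_RATE_KW,
                         (BATTERY_KWH - soc) / step_h)
            soc += charge * step_h
            grid.append(load + charge)
        else:
            grid.append(load)
    return grid

print(max(smooth([2500, 3600, 3800, 2400] * 30)))  # stays at or below 3000 kW
```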
Power strategy and cooling strategy are now tightly linked. Liquid cooling systems depend on stable, consistent energy delivery. A short interruption can cause fluid circulation to stall, resulting in rapid temperature rises that leave little room for manual intervention. In AI-heavy environments, this coupling means power architecture must be designed with thermal responsiveness in mind.
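To see why the margin is so small, a rough calculation under assumed values: the rack power, the thermal mass held in the cold-plate loop and the temperature headroom are all illustrative, not taken from a real system.

```python
# How fast temperatures climb if coolant circulation stalls.
# Illustrative assumptions: a 50 kW rack whose cold plates, manifolds and
# trapped coolant hold roughly 40 kg of water-equivalent thermal mass.

RACK_POWER_W = 50_000.0
WATER_EQUIV_MASS_KG = 40.0
CP_WATER = 4186.0   # J/(kg*K)
HEADROOM_K = 20.0   # allowed temperature rise before throttling or shutdown

rise_per_second = RACK_POWER_W / (WATER_EQUIV_MASS_KG * CP_WATER)
seconds_to_limit = HEADROOM_K / rise_per_second
print(f"~{rise_per_second:.2f} K/s -> ~{seconds_to_limit:.0f} s to use up "
      f"{HEADROOM_K:.0f} K of headroom")
```

Even with generous assumptions, the window is on the order of a minute, which is why power architecture and thermal controls have to respond automatically rather than wait for an operator.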
Heat reuse is moving from theory to practice
One consequence of AI’s thermal load is that waste heat is becoming more visible - and more valuable. In colder climates, or where urban development surrounds the data centre, there is renewed interest in capturing and reusing expelled heat.
Liquid cooling systems are particularly well-suited to this, as they consolidate heat into a recoverable form. In some facilities, heat exchangers are already transferring thermal energy to neighbouring buildings or feeding into district heating networks. In the UK, planning authorities are beginning to factor this into sustainability assessments, particularly for large developments near residential areas.
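As a sense check of the scale involved, the sketch below uses purely illustrative assumptions about IT load, recovery fraction and running hours; actual recoverable heat depends heavily on the cooling architecture and the offtaker.

```python
# Rough sizing of recoverable heat from a liquid-cooled hall.
# All inputs are illustrative assumptions.

IT_LOAD_MW = 5.0         # average IT load of the hall
RECOVERY_FRACTION = 0.7  # assumed share of heat captured in the liquid loop
HOURS_PER_YEAR = 8760

recoverable_mw = IT_LOAD_MW * RECOVERY_FRACTION
annual_mwh = recoverable_mw * HOURS_PER_YEAR
print(f"~{recoverable_mw:.1f} MW of continuous low-grade heat, "
      f"~{annual_mwh:,.0f} MWh per year available to a district network")
```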
Heat reuse isn’t simple. It requires collaboration with local stakeholders, investment in infrastructure, and detailed analysis of usage patterns and compatibility. But as net zero deadlines approach, and as energy efficiency becomes a competitive differentiator, more operators are exploring how to turn thermal output into a resource.
Infrastructure visibility is now essential
With infrastructure systems becoming more integrated, and load profiles becoming less predictable, visibility is no longer a luxury. Data centre operators need a real-time understanding of how their infrastructure is behaving.
This includes traditional telemetry: temperature, humidity, power draw, and equipment status. But increasingly, it also includes cognitive analytics, anomaly detection, and load forecasting. Facilities teams are using AI-powered monitoring systems to help manage the very workloads that are driving these changes - automating responses, preventing failures and optimising efficiency.
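One simple example of the kind of analytics layered onto that telemetry is a rolling z-score check on power draw readings, sketched below. Real monitoring platforms use far richer models; the window size and threshold here are invented for illustration.

```python
from collections import deque
from statistics import mean, stdev

# Minimal rolling z-score anomaly check on a power-draw telemetry stream.
# Window size and threshold are illustrative choices, not vendor defaults.

WINDOW = 60      # keep the last 60 readings (e.g. one per minute)
THRESHOLD = 4.0  # flag readings more than 4 standard deviations out

def make_detector(window: int = WINDOW, threshold: float = THRESHOLD):
    history = deque(maxlen=window)

    def check(reading_kw: float) -> bool:
        """Return True if the reading looks anomalous against recent history."""
        anomalous = False
        if len(history) >= 10:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(reading_kw - mu) > threshold * sigma:
                anomalous = True
        history.append(reading_kw)
        return anomalous

    return check

check = make_detector()
stream = [820.0 + i % 5 for i in range(30)] + [1450.0]  # sudden jump at the end
flags = [check(r) for r in stream]
print(f"Anomaly flagged at reading {flags.index(True)}" if any(flags) else "No anomalies")
```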
Visibility also supports regulatory compliance. With reporting standards tightening, especially around energy use and emissions, operators must be able to produce credible, auditable performance data. The right monitoring infrastructure makes that possible.
Preparing for long-term change
AI has accelerated the infrastructure conversation. Decisions that might once have been delayed until the next refresh cycle are now being pulled forward. Operators are revisiting thermal capacity, power distribution, physical layout and monitoring strategy - not because of failure, but because of momentum.
Those who act early are likely to benefit. Retrofitting for liquid cooling or high-density racks is easier when done proactively. Planning for modular power expansion is simpler before grid connections hit their limits. Embedding digital twins and continuous commissioning processes saves time and cost in the long run.
AI is not a passing trend. It’s reshaping how businesses operate and how data centres are built. That doesn’t mean throwing out the foundations of good infrastructure design - but it does mean adapting them to a new reality. Responsiveness, rather than raw scale, will define the next wave of successful facilities.