The AI Boom Is Exposing a Delivery Gap

By Brian O’Hare, Service Director at BCS Consultancy.

Thursday, 12th March 2026

AI is no longer an innovation story. It is an operational stress test for the entire data centre ecosystem, and commissioning has become the point where theoretical design meets operational reality. That stress is most visible in the infrastructure layer that has changed most rapidly: cooling.

Liquid cooling changes everything

Until recently, liquid cooling was viewed as a specialist or experimental technology. Today, with increasing rack densities and thermal loads, direct-to-chip or immersion cooling technologies are indispensable for AI and high-performance computing workloads. So much so that the long-standing principle of never introducing liquids into a live data centre has effectively been overturned. 

Yet the desire to adopt liquid cooling does not automatically translate into delivery readiness. Liquid cooling requires extremely tight thermal and hydraulic control, strict cleanliness standards, and new risk-management approaches. This forces a break from traditional data centre standardisation. 

Data centre design has always reflected different redundancy models and operating strategies, but high-density workloads have pushed that variability into far more exacting territory. Chip manufacturers and server OEMs define strict operating parameters, while cooling vendors apply their own specifications in the absence of fully embedded industry-wide liquid cooling standards. As a result, facilities must now be engineered around specific workloads and cooling architectures from the outset. This increases complexity and makes early-stage design coordination critical to long-term performance.

One of the greatest risks lies in failing to assess operational implications during the design phase. AI and HPC deployments rely on tightly integrated cooling chains, from chillers and pumps through CDUs and distribution systems to the cold plates on the chips themselves. Misalignment at any point compromises performance, resilience and efficiency. Design decisions are no longer theoretical; they determine how the facility will be commissioned, operated and maintained.
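The bottleneck logic behind that "weakest link" argument can be sketched in a few lines. This is an illustrative toy model only; the stage names and capacity figures are hypothetical examples, not vendor data or figures from the article.

```python
# Toy model of a direct-to-chip cooling chain: the usable capacity of the
# whole chain is capped by its most constrained stage.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    capacity_kw: float  # maximum heat this stage can reject or transport

def chain_bottleneck(stages: list[Stage]) -> Stage:
    """Return the stage that limits the chain's end-to-end capacity."""
    return min(stages, key=lambda s: s.capacity_kw)

# Hypothetical chain, chillers through to cold plates.
chain = [
    Stage("chiller plant", 1200.0),
    Stage("secondary pumps", 1100.0),
    Stage("CDU", 800.0),            # undersized relative to the rest
    Stage("rack manifolds", 1000.0),
    Stage("cold plates", 950.0),
]

weakest = chain_bottleneck(chain)
print(f"Usable capacity: {weakest.capacity_kw:.0f} kW, limited by {weakest.name}")
```

However well each component performs in isolation, the chain delivers only what its most constrained stage allows, which is why design-phase coordination across vendors matters.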

Execution capacity is under strain

Commissioning exposes the pressure points first. Liquid-cooled environments demand specialist knowledge, controlled cleanliness standards and equipment that remains in short supply. High-capacity load banks and appropriate heat rejection solutions require advance planning well beyond traditional procurement cycles. Skilled commissioning engineers with hands-on liquid cooling experience remain scarce. 

These shortages are already leading to extended commissioning programmes and the need to pre-book equipment months, or even years, in advance. While manufacturers are increasing production, it remains questionable whether supply can keep pace with the speed of AI-driven design and construction.

The skills gap extends beyond commissioning. AI data centres require professionals who understand mechanical systems, electrical infrastructure, controls logic and IT architecture as an integrated whole. The skill base exists, but is limited, and structured training pathways are not yet keeping pace with demand. Standards bodies such as ASHRAE and BSRIA continue to provide essential guidance, but much of their foundational work is based on air-cooled assumptions. The competency framework must evolve in parallel with the technology.

Operationally, facility management teams inherit a more complex environment than any previous generation of data centre operators. Day-to-day management now includes stringent monitoring of flow rates, pressures and temperatures. Water chemistry management becomes as critical as electrical redundancy. Response protocols must cover failure modes that simply do not exist in traditional air-cooled sites. Without comprehensive retraining and new operational frameworks, facility management teams will struggle to maintain confidence in these highly complex environments.
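The kind of envelope monitoring described above can be sketched as a simple bounds check. All parameter names and limits here are hypothetical placeholders for illustration, not ASHRAE figures or any vendor's actual operating envelope.

```python
# Illustrative sketch: flag liquid-loop telemetry readings that fall outside
# a defined operating envelope. Limits below are invented examples.
LIMITS = {
    "supply_temp_c":   (20.0, 32.0),   # coolant supply temperature
    "flow_lpm":        (40.0, 120.0),  # loop flow rate (litres/min)
    "pressure_bar":    (1.5, 4.0),     # loop pressure
    "ph":              (7.0, 9.5),     # water chemistry: pH
    "conductivity_us": (0.0, 500.0),   # water chemistry: conductivity (uS/cm)
}

def out_of_envelope(sample: dict[str, float]) -> list[str]:
    """Return the names of any readings missing or outside their envelope."""
    alarms = []
    for key, (lo, hi) in LIMITS.items():
        value = sample.get(key)
        if value is None or not lo <= value <= hi:
            alarms.append(key)
    return alarms

reading = {"supply_temp_c": 33.5, "flow_lpm": 85.0,
           "pressure_bar": 2.2, "ph": 8.1, "conductivity_us": 140.0}
print(out_of_envelope(reading))  # supply temperature exceeds its upper bound
```

In practice this logic lives inside a BMS or DCIM platform rather than a script, but the point stands: water chemistry and hydraulics become first-class monitored parameters alongside power and cooling capacity.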

Delivery discipline wins

The wider market context reinforces these challenges. BCS Consultancy’s Data Centre Truths 2026 report highlights that 95 percent of respondents expect the availability of skilled professionals to decline while demand rises. At the same time, 85 percent believe existing facilities are not ready for AI-heavy workloads. Delivery capacity now determines competitive position.

The industry often frames AI infrastructure as a race for megawatts. In practice, it is a race for execution capability. Power availability and planning approval are critical, but they no longer guarantee delivery. Projects will only succeed when developers align design intent, specialist resource, commissioning strategy and long-term operational competence from the outset.

Technology is no longer the primary risk. The real risk lies in overestimating organisational readiness to implement it at scale. Those who invest early in skills development, specialist commissioning expertise and operational transformation will shape the next generation of data centres. 

AI has already reset the technical baseline. The market will now test whether the industry can reset its execution model with equal discipline.
