Titled "The AI Disruption: Challenges and Guidance for Data Centre Design," this groundbreaking document provides invaluable insights and acts as a comprehensive blueprint for organisations seeking to leverage AI to its fullest potential within their data centres, including a forward-looking view of emerging technologies to support high density AI clusters in the future.
The disruption brought about by Artificial Intelligence has created significant changes and challenges in data centre design and operation. As AI applications become more prevalent and impactful across industry sectors ranging from healthcare and finance to manufacturing, transportation and entertainment, the demand for processing power has grown with them. Data centres must adapt to meet the evolving power needs of AI-driven applications effectively.
Pioneering the Future of Data Centre Design
AI workloads are projected to grow at a compound annual growth rate (CAGR) of 26-36% through 2028, driving increased power demand within both existing and new data centres. Servicing this projected energy demand involves several key considerations outlined in the White Paper, which addresses the four physical infrastructure categories: power, cooling, racks and software tools. White Paper 110 is available for download here.
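For context, a 26-36% CAGR compounds quickly. The minimal Python sketch below is illustrative only: it uses a placeholder baseline of 1.0 rather than any figure from White Paper 110, and simply shows that demand at those growth rates roughly triples to quadruples over five years.

```python
# Illustrative sketch (not from the white paper): how a 26-36% CAGR
# compounds over five years. The 1.0 baseline is a placeholder unit of
# AI power demand, not a figure taken from White Paper 110.

def compound_growth(baseline: float, cagr: float, years: int) -> float:
    """Return demand after `years` of growth at the given CAGR."""
    return baseline * (1 + cagr) ** years

baseline_2023 = 1.0  # hypothetical baseline (1 unit of AI power demand)
for cagr in (0.26, 0.36):
    demand_2028 = compound_growth(baseline_2023, cagr, years=5)
    print(f"CAGR {cagr:.0%}: demand grows to {demand_2028:.1f}x the 2023 level by 2028")
```

Run as written, this prints growth factors of roughly 3.2x and 4.7x, which is the scale of increase data centre operators would need to plan for under those projections.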
In an era where AI is reshaping industries and redefining competitiveness, Schneider Electric’s latest white paper paves the way for businesses to design data centres that are not just capable of supporting AI, but fully optimised for it. The white paper introduces innovative concepts and best practices, positioning Schneider Electric as a frontrunner in the evolution of data centre infrastructure.
“As AI continues to advance, it places unique demands on data centre design and management. To address these challenges, it’s important to consider several key attributes and trends of AI workloads that impact both new and existing data centres,” said Pankaj Sharma, Executive Vice President, Secure Power Division and Data Centre Business at Schneider Electric. “AI applications, especially training clusters, are highly compute-intensive and require large amounts of processing power provided by GPUs or specialised AI accelerators. This puts a significant strain on the power and cooling infrastructure of data centres. And as energy costs rise and environmental concerns grow, data centres must focus on energy-efficient hardware, such as high-efficiency power and cooling systems, and renewable power sources to help reduce operational costs and carbon footprint.”
This new blueprint for organisations seeking to leverage AI to its fullest potential within their data centres has been welcomed by customers.
“The AI market is fast-growing and we believe it will become a fundamental technology for enterprises to unlock outcomes faster and significantly improve productivity,” said Evan Sparks, chief product officer for Artificial Intelligence at Hewlett Packard Enterprise. “As AI becomes a dominant workload in the data centre, organisations need to start thinking intentionally about designing a full stack to solve their AI problems. We are already seeing massive demand for AI compute accelerators, but balancing this with the right level of fabric and storage and enabling this scale requires well-designed software platforms. To address this, enterprises should look to solutions such as specialised machine learning development and data management software that provide visibility into data usage and ensure data is safe and reliable before deploying. Together with implementing end-to-end data centre solutions that are designed to deliver sustainable computing, we can enable our customers to successfully design and deploy AI, and do so responsibly.”
Unlocking the Full Potential of AI
Schneider Electric's AI-Ready Data Centre Guide explores the critical intersections of AI and data centre infrastructure, addressing key considerations such as:
● Guidance on the four key AI attributes and trends that underpin physical infrastructure challenges in power, cooling, racks and software management.
● Recommendations for assessing and supporting the extreme rack power densities of AI training servers.
● Guidance for achieving a successful transition from air cooling to liquid cooling to support the growing thermal design power (TDP) of AI workloads.
● Proposed rack specifications to better accommodate AI servers that require high power, cooling manifolds and piping, and a large number of network cables.
● Guidance on using data centre infrastructure management (DCIM), electrical power management system (EPMS) and building management system (BMS) software to create digital twins of the data centre and to support operations and asset management.
● A forward-looking view of emerging technologies and design approaches to help address the continued evolution of AI.