FROM THE INDUSTRY
How do you see High-Performance Computing (HPC) and Artificial Intelligence (AI) transforming digital infrastructure over the next five years?
HPC and AI are driving a fundamental redesign of data centre architecture. Traditional enterprise and hyperscale environments are no longer sufficient. We’re now facing complete structural overhauls, from compute density to power delivery and thermal management. This shift is not incremental; it redefines how facilities are built and operated. The change has really taken off over the last five years, with HPC and AI forcing a complete overhaul of existing facilities and paving the way for a new era of data centres.

Do you think that existing data centres are going to have to be either expanded or completely reconfigured on the inside?
Both. While compute performance is increasing, the physical footprint of compute hardware is decreasing. However, this higher-density equipment introduces complex requirements for power and cooling that exceed traditional designs. As such, we anticipate a gradual scaling down of specific areas, while others expand to meet new infrastructure needs. A 60kW rack, for instance, carries far more thermal and electrical load than its predecessor, even in the same space. Existing data centres must be upgraded and restructured, while new builds require fresh architectural thinking to support both current HPC/AI workloads and the next wave of technological evolution.

Which trend do you think will have a bigger impact on infrastructure design: the rise in power density or the adoption of AI-driven management?
These trends are linked and address different aspects of the infrastructure. AI is driving the generation of vast volumes of data, while power density is about the ability to process and store that data efficiently. The industry is moving beyond merely storing data; data centres are now creators of data. This shift necessitates high-capacity, high-power racks, fundamentally altering rack design and deployment. For instance, a 60kW rack has a similar footprint to a traditional rack but demands a radically different structural and cooling design to manage the additional weight and power.
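To make the jump to a 60kW rack concrete, the short Python sketch below converts that heat load into the coolant flow needed to carry it away. The temperature rise, choice of water as coolant and the fluid properties are assumptions made purely for illustration; they are not figures from the interview or from any specific product.

# Rough, illustrative sizing only; all inputs are assumed example values.
rack_power_w = 60_000          # 60kW rack: nearly all electrical input becomes heat

# Liquid cooling loop (e.g. a rear-door heat exchanger or direct-to-chip circuit)
delta_t_k = 10.0               # assumed coolant temperature rise across the rack, K
cp_water = 4186.0              # specific heat of water, J/(kg*K)
water_density = 1000.0         # kg/m^3

mass_flow = rack_power_w / (cp_water * delta_t_k)        # from Q = m_dot * cp * dT
litres_per_min = mass_flow / water_density * 1000 * 60
print(f"Water loop: about {mass_flow:.2f} kg/s ({litres_per_min:.0f} L/min)")

# The same heat load removed by air at the same 10 K rise
cp_air = 1005.0                # J/(kg*K)
air_density = 1.2              # kg/m^3
air_mass_flow = rack_power_w / (cp_air * delta_t_k)
air_volume = air_mass_flow / air_density                 # m^3/s
print(f"Air equivalent: about {air_volume:.1f} m^3/s of airflow for one rack")

Under these assumptions the result is roughly 86 litres of water per minute versus around five cubic metres of air per second for a single rack, which is the kind of gap that pushes high-density deployments towards liquid cooling.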
What are the biggest infrastructure challenges today when trying to support 30-60kW+ racks?
The primary challenge lies in power delivery. Governments and utilities must adapt existing grid infrastructure to support these escalating demands. In regions such as the UK and across Europe, current infrastructure is already under considerable pressure, and this will only intensify as demand continues to grow.

How can this be future-proofed?
That’s what has caught everybody out. Recent government announcements said that investment would focus on data centres in the UK, but within hours there was commentary that the UK’s power infrastructure would not be able to compete. The good thing is that the different government bodies and the investment companies behind data centres are starting to understand that they must work together to execute a 5–10-year, even 15–20-year, strategy.

Is the industry doing enough to prepare for these challenges?
There’s positive momentum. Governments, regulators, and investors are beginning to think in long-term horizons: five, ten, even twenty years. While coordination is improving, infrastructure still needs to catch up to demand.

How are organisations adapting their cooling strategies for modern HPC and AI deployments?
Cooling has evolved significantly. It began with conventional air conditioning, progressed to hot and cold aisle containment, and then to rear-door heat exchangers (RDHX), which offer a more targeted and efficient cooling method. RDHX systems support sustainability by allowing higher compute densities without overhauling the entire infrastructure. The next phase includes on-chip cooling and waste heat reuse, for instance repurposing expelled heat for residential or governmental heating purposes. The most transformative step is immersive cooling, which demands a complete redesign of the compute environment, from tanks and fluids to cable management and monitoring systems, representing a complete ecosystem change.

Are traditional air-cooled data centres becoming obsolete?
Not obsolete, but limited. Air cooling still serves lower-density or legacy applications, but it struggles with scalability. For modern workloads, sustainable and high-efficiency cooling systems are the future.

How feasible is it to achieve a Power Usage Effectiveness (PUE) below 1.2 when scaling AI and HPC workloads?
Achieving sub-1.2 PUE is ambitious but possible with a comprehensive approach. This includes upgrading legacy technologies, managing aisle containment systems and integrating the latest in cooling and power delivery. At Netceed, we offer end-to-end solutions, from connectivity and power to cooling and racks, because building high-performance facilities requires a holistic strategy. In many cases, constructing new facilities with state-of-the-art technology from day one is more efficient than retrofitting older ones.
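For readers unfamiliar with the metric, PUE is total facility energy divided by the energy used by the IT equipment itself, so a PUE of 1.2 leaves at most 20% of the IT load for cooling, power conversion and everything else. The minimal Python sketch below walks through that arithmetic; the load figures are assumptions chosen for the example, not data from any real facility.

# Illustrative PUE arithmetic; the load figures below are assumed example values.
it_load_mw = 10.0          # servers, storage and network equipment
cooling_mw = 1.4           # chillers, air handlers, pumps, RDHX fans, etc.
power_losses_mw = 0.5      # UPS, transformer and distribution losses
other_overhead_mw = 0.1    # lighting, offices, security

total_facility_mw = it_load_mw + cooling_mw + power_losses_mw + other_overhead_mw
pue = total_facility_mw / it_load_mw
print(f"PUE = {total_facility_mw:.1f} / {it_load_mw:.1f} = {pue:.2f}")   # 1.20

# Overhead budget that keeps a facility of this size at or below PUE 1.2
overhead_budget_mw = (1.2 - 1.0) * it_load_mw
print(f"Overhead budget at {it_load_mw:.0f} MW of IT load: {overhead_budget_mw:.1f} MW")

At 10 MW of IT load, each 0.1 of PUE corresponds to a full megawatt of overhead, which is why the answer above stresses a holistic approach rather than isolated upgrades.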