Hyperscale operators are reinforcing this shift through collaborative standards. The Mt. Diablo project, led by Microsoft and Meta with participation from Google, defines a standardised ±400 VDC architecture that separates power conversion from compute racks. This approach allows operators to scale power and compute independently as densities rise, while simplifying upgrades and layout planning.

These initiatives matter because they reduce risk. They also change how operators plan and build. Standardised HVDC architectures allow power systems and compute platforms to evolve on different timelines, rather than forcing tightly coupled upgrades. By moving rectification and conversion away from the rack, operators gain flexibility in layout, cooling integration, and long-term expansion planning.

This separation becomes critical as rack densities climb toward levels that strain traditional layouts. Power infrastructure competes directly with compute, networking, and cooling systems for space and service access. Removing bulky conversion hardware from the rack reduces congestion and simplifies maintenance, while allowing higher-density optical and liquid-cooling designs to take precedence.

From an investment standpoint, alignment around HVDC standards reduces uncertainty across the supply chain. Equipment manufacturers, integrators, and operators can design for common voltage levels, protection schemes, and distribution models. That alignment lowers integration risk and accelerates deployment timelines, particularly in large greenfield AI facilities where delays in power delivery can stall entire projects.
As a result, HVDC is no longer viewed as an experimental alternative. It is increasingly treated as an enabling layer for high-density AI infrastructure, shaping facility design decisions well before construction begins.

POWER, COOLING, AND OPTICS ARE NO LONGER SEPARATE DECISIONS

HVDC is not just a power decision. It changes how data centres integrate power delivery, cooling, and optical infrastructure. Higher rack densities drive higher fibre density. AI clusters rely on massive east-west traffic, low latency, and high-speed optical interconnects. When power conversion hardware moves out of the rack, designers gain space and flexibility for compute and networking equipment. That shift supports higher port counts, denser fibre routing, and more modular layouts.

At the same time, thermal demands rise sharply. Air cooling alone cannot support AI-era rack densities. Liquid cooling has moved from optional to necessary in many deployments. HVDC complements this transition by reducing electrical losses and heat generation in the power distribution system itself. In some cases, technologies proven in other HVDC-heavy industries, such as electric vehicle charging, inform designs for liquid-cooled busways and distribution components.

These changes force closer coordination between electrical, thermal, and optical design. Power pathways influence where fibre frames sit. Cooling strategies affect cable routing and service access. As a result, data centre design is becoming more integrated. Teams can no longer optimise power, cooling, or optics in isolation.
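The loss argument comes down to Ohm's law: for a fixed rack power, raising the distribution voltage lowers the current, and resistive heating in the feed conductors falls with the square of that current. The short Python sketch below makes the arithmetic concrete. The 100 kW rack load, 1 mΩ loop resistance, and the 48 VDC baseline are illustrative assumptions rather than figures from this article, and the resistance is held constant across both cases purely to show the scaling.

```python
# Illustrative sketch only: current and conduction loss for one rack feed
# at two distribution voltages. All figures below are assumed for the
# purpose of the arithmetic, not values quoted in the article.

def feed_current(power_w: float, voltage_v: float) -> float:
    """Current drawn from the distribution bus for a given rack power (I = P / V)."""
    return power_w / voltage_v

def conduction_loss(current_a: float, loop_resistance_ohm: float) -> float:
    """Resistive loss dissipated as heat in the feed conductors (P_loss = I^2 * R)."""
    return current_a ** 2 * loop_resistance_ohm

RACK_POWER_W = 100_000        # assumed AI rack load
LOOP_RESISTANCE_OHM = 0.001   # assumed end-to-end conductor resistance per feed

for label, volts in [("48 VDC rack bus", 48.0), ("±400 VDC (800 V pole-to-pole)", 800.0)]:
    amps = feed_current(RACK_POWER_W, volts)
    loss_w = conduction_loss(amps, LOOP_RESISTANCE_OHM)
    print(f"{label}: {amps:,.0f} A, {loss_w:,.0f} W lost as heat in the feed")
```

In practice the two systems would use different conductors and converter stages, so the absolute numbers differ; the point is the quadratic relationship, roughly a 17x drop in current and a near-280x drop in conduction loss for the same conductor in this simplified comparison.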
WHAT THIS SHIFT MEANS FOR DATA CENTRES

HVDC will not replace AC everywhere or all at once. AC will remain dominant in many colocation, enterprise, and edge environments, especially where legacy infrastructure and mixed workloads prevail. However, in hyperscale and AI-focused greenfield builds, HVDC is positioned to capture a growing share of deployments through the end of the decade.

More importantly, HVDC signals a broader shift in how the industry approaches infrastructure. Power architecture now shapes facility layout, optical density, cooling strategy, and long-term scalability. As AI workloads continue to grow, data centres must evolve as integrated systems rather than collections of independent subsystems.

HVDC reflects that evolution. It provides a practical response to the physical limits exposed by AI, while aligning power delivery with the realities of high-density compute and optical connectivity. For the AI era, that alignment is no longer optional. It’s fundamental.
Brad Hawkins, Data Centre Systems Engineer, Amphenol Network Solutions