THE POWER ARCHITECTURE SHIFT BEHIND AI DATA CENTRES

Amphenol Network Solutions Data Centre Systems Engineer Brad Hawkins explores how the rapid rise of AI workloads is exposing fundamental limits in today's data centre power architectures, and how HVDC is emerging as a practical response.

Artificial intelligence is changing the physical limits of data centre design. As AI training and inference push rack power densities far beyond historical norms, power architecture has become a defining constraint. Traditional approaches no longer scale efficiently. In response, high-voltage direct current (HVDC) is moving from evaluation to adoption as a practical foundation for AI-era data centres.

HVDC adoption is not driven by novelty. It is driven by physics, economics, and the need to align power, cooling, and optical infrastructure around much higher densities. Three forces explain why this shift is happening now.

WHY TRADITIONAL POWER ARCHITECTURES NO LONGER SCALE

AI accelerators operate on low-voltage DC power, yet most data centres still rely on AC-based distribution. That mismatch creates inefficiency. Power moves through multiple conversion stages before it reaches the chip, and each step introduces loss and heat. At modest rack densities, operators could absorb those losses. AI changes the equation. According to industry projections, hyperscale AI racks are moving well beyond 50 kW, with greenfield designs expected to reach hundreds of kilowatts and in some cases approach megawatt-class densities over the next several years. At those levels, conversion losses and resistive heating are no longer marginal issues. They directly affect capacity, operating cost, and cooling requirements.

As densities rise into the tens and hundreds of kilowatts per rack, inefficiency stops being an abstract metric and becomes a capacity limiter. Every percentage point lost to conversion or distribution overhead reduces the amount of usable compute that can be deployed within a fixed power envelope. In large AI facilities, this effect compounds quickly.

Operators are increasingly constrained not by the cost of electricity, but by access to it. Grid interconnection timelines are stretching, and power availability has become one of the primary gating factors for new builds and expansions. Under those conditions, architectures that waste less power do more than lower operating expenses. They determine how much infrastructure can be activated at all.

Non-IT power consumption also takes on new significance at scale. Power conversion losses and thermal overhead translate into megawatts of additional demand across large campuses, adding millions of dollars in annual operating cost and reducing headroom for future growth. As a result, hyperscale operators are re-evaluating architectures that were acceptable at lower densities but impose hidden penalties as AI workloads scale.

Lower-voltage DC systems improve efficiency by reducing conversion steps, but they introduce higher current. Higher current increases copper usage, cable size, and heat generation. That tradeoff limits their usefulness as power densities rise. HVDC addresses both problems. By rectifying power once at the facility edge and distributing it at 400 VDC or 800 VDC, HVDC reduces conversion stages and lowers current. The result is higher grid-to-chip efficiency, reduced material intensity, and a power architecture that can scale with AI workloads rather than constrain them.

HOW STANDARDS ARE TURNING HVDC INTO A DEFAULT CHOICE

HVDC adoption gained momentum once hyperscalers and chip manufacturers began aligning around common approaches. Power architecture decisions at the rack level shape entire facilities, and the industry now has clearer direction. NVIDIA's decision to standardise future rack-scale systems on 800 V HVDC starting in 2027 marked a turning point. As the primary supplier of AI accelerators, NVIDIA's choices influence server design, power distribution, and supplier roadmaps across the ecosystem. By moving AC-to-DC rectification to the data centre edge and distributing HVDC throughout the facility, the company aims to reduce copper usage, free rack space, and support higher densities.
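The two scaling arguments above, fewer conversion stages and lower distribution current, come down to simple arithmetic, and a short sketch makes them concrete. This is illustrative only: the stage efficiencies, site power, rack power, and cable resistance below are assumed values chosen to show the scaling, not figures from the article; only the 400 VDC and 800 VDC distribution levels come from the text, and 48 V is an assumed stand-in for a low-voltage DC baseline.

```python
# Illustrative sketch of two scaling effects behind HVDC adoption.
# All numbers are assumptions for illustration, except the 400/800 VDC
# distribution levels named in the article.

def chain_efficiency(stages):
    """End-to-end efficiency of a power chain: the product of its stage efficiencies."""
    eff = 1.0
    for s in stages:
        eff *= s
    return eff

def feeder(power_w, voltage_v, resistance_ohm):
    """Current (A) and resistive loss (W) in a DC feeder: I = P/V, P_loss = I^2 * R."""
    current = power_w / voltage_v
    return current, current ** 2 * resistance_ohm

# 1) Fewer conversion stages -> more usable compute in a fixed power envelope.
#    Hypothetical stage efficiencies for a legacy AC chain vs. an HVDC chain.
legacy = chain_efficiency([0.99, 0.94, 0.95, 0.93])  # transformer, UPS, PSU, VRM (assumed)
hvdc = chain_efficiency([0.985, 0.975, 0.93])        # edge rectifier, DC/DC, VRM (assumed)
site_power_w = 100e6                                 # assumed 100 MW envelope
print(f"legacy chain: {legacy:.1%} -> {site_power_w * legacy / 1e6:.1f} MW at the chip")
print(f"HVDC chain:   {hvdc:.1%} -> {site_power_w * hvdc / 1e6:.1f} MW at the chip")

# 2) Higher voltage -> lower current -> quadratically lower I^2*R loss and less copper.
rack_power_w = 200_000   # assumed 200 kW AI rack
feeder_ohm = 0.002       # assumed 2 mOhm feeder run
for v in (48, 400, 800):
    amps, loss_w = feeder(rack_power_w, v, feeder_ohm)
    print(f"{v:>4} VDC: {amps:7.0f} A, {loss_w / 1e3:6.2f} kW feeder loss")
```

Because loss scales with the square of current, doubling the distribution voltage halves the current and cuts resistive loss by a factor of four; that quadratic relationship is why moving from low-voltage DC to 800 VDC reduces copper, cable size, and heat at the same time.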
ISSUE 43 | Q1 2026
www.opticalconnectionsnews.com