ICT Today, Volume 46, Issue 3 | July/August/September 2025

Redundancy configurations vary by workload criticality. Inference workloads, which are latency-sensitive and customer-facing, typically use 2N configurations with fully mirrored power and cooling paths. Training workloads may adopt N+1 setups, balancing fault tolerance with cost efficiency. In high-density AI training clusters, where heat flux is extreme and thermal transients can exceed 1°C per second, even brief cooling disruptions can degrade performance or damage equipment.

To improve both resilience and operational flexibility, many AI data centers are adopting hybrid cooling strategies and modular infrastructure. Hybrid deployments, such as rear-door heat exchangers (RDHx) paired with split air/liquid systems, allow maintenance to be performed without interrupting workloads. Modular coolant distribution units (CDUs), immersion tanks, and piping loops support scalable growth while integrating redundancy at the component level. Together, these strategies help liquid-cooled environments achieve power usage effectiveness (PUE) ratings as low as 1.05 to 1.07, well below the 1.5 average typical of air-cooled systems.
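As a rough illustration of what that PUE gap means in practice, the short sketch below compares total facility draw for the same IT load at the liquid-cooled and air-cooled PUE levels cited above. The 2 MW IT load is an assumed example value, not a figure from this article, and the calculation ignores partial-load behavior, water use, and other factors a real comparison would include.

```python
# Illustrative PUE comparison. PUE = total facility power / IT power, so a
# lower PUE means less overhead energy for cooling and power delivery.
# The IT load below is an assumed example value; the PUE figures echo the
# ranges cited in the text above.

HOURS_PER_YEAR = 8760

def facility_power_kw(it_load_kw: float, pue: float) -> float:
    """Total facility power implied by a given PUE."""
    return it_load_kw * pue

it_load_kw = 2000    # assumed 2 MW of IT (GPU) load
pue_liquid = 1.06    # midpoint of the 1.05-1.07 range for liquid cooling
pue_air = 1.50       # typical air-cooled average

liquid_kw = facility_power_kw(it_load_kw, pue_liquid)
air_kw = facility_power_kw(it_load_kw, pue_air)
annual_savings_kwh = (air_kw - liquid_kw) * HOURS_PER_YEAR

print(f"Air-cooled facility draw:    {air_kw:,.0f} kW")
print(f"Liquid-cooled facility draw: {liquid_kw:,.0f} kW")
print(f"Energy avoided per year:     {annual_savings_kwh:,.0f} kWh")
```

At these assumed values the difference is roughly 880 kW of continuous load, or on the order of 7.7 million kWh per year, which is why PUE improvements of this size matter at AI scale.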

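Before turning to facility-level impacts, it is worth quantifying the failover window that those transient rates imply. The sketch below is simple arithmetic under assumed values: only the roughly 1°C-per-second figure comes from the discussion above, while the 15°C margin between normal operating temperature and a throttle threshold is a hypothetical example.

```python
# Back-of-the-envelope ride-through estimate after a cooling interruption.
# margin_c is an assumed temperature headroom; the rates bracket the
# >1 C/s transients described for dense AI training racks.

def ride_through_seconds(margin_c: float, rise_rate_c_per_s: float) -> float:
    """Seconds until a throttle or shutdown threshold is reached once cooling is lost."""
    return margin_c / rise_rate_c_per_s

margin_c = 15.0  # assumed headroom to the throttle limit, in degrees C

for rate in (0.5, 1.0, 2.0):  # candidate transient rates, degrees C per second
    window = ride_through_seconds(margin_c, rate)
    print(f"At {rate:.1f} C/s, backup cooling must take over within ~{window:.0f} s")
```

Windows of a few tens of seconds or less leave no room for manual intervention, which is why the failover behavior described below relies on active, automatic redundancy.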
In addition to rack-level planning, liquid cooling influences broader aspects of data center design. Higher power densities may require increased capacity from primary power sources and distribution paths. Heat rejection infrastructure, such as dry coolers or heat exchangers, may require additional external space. During system failover events, rapid temperature rise in the technology cooling system (TCS) requires active redundancy to ensure a safe, seamless transition.

CONCLUSION

Liquid cooling enables AI data centers to support the increasing power densities driven by GPU-based workloads while reducing the energy used for thermal management, extending equipment lifecycles, and advancing sustainability and regulatory goals.

However, successful deployment requires careful planning across infrastructure, operations, and safety systems. Many of the considerations outlined in this article remain vendor-specific and lack standardized, widely adopted solutions. As with any emerging technology, early adoption and pilot deployments are key to shaping best practices and identifying optimal design frameworks. As the liquid cooling ecosystem evolves, new standardized and scalable solutions will emerge, and TIA will incorporate these best practices into future editions of the TIA-942 standard. In the meantime, AI data center operators must strategically weigh trade-offs to ensure they are positioned to adopt and benefit from more resilient, efficient, and cost-effective technologies.

AUTHOR BIOGRAPHIES:

Mike Connaughton is Senior Product Manager for Leviton Network Solutions and has more than 30 years of experience with optical fiber cabling. He is responsible for strategic data center planning, technical account support, and alliances. Mike received his BSEET degree from Wentworth Institute of Technology (Boston, MA) in 1990 and has been involved in optical fiber cable engineering ever since. He can be reached at mike.connaughton@leviton.com.

Jacques Fluet has more than 30 years of experience in telecommunications, including leadership roles at Nortel and Ericsson. He has extensive experience in global product introduction projects, leading diverse teams in product development, verification, and customer trials. As TIA's former Director of Data Center Program, Jacques contributed to technology programs related to 5G, service assurance, smart buildings, and data centers. He can be reached at jfluet@tiaonline.org.

SOURCES:

1. Hyperscale Computing Market Size, Share & Trends Analysis Report, 2023–2030, Grand View Research.
2. Tobias Mann, "More than a third of enterprise data centers expect to deploy liquid cooling by 2026," The Register.
