NATHAN TRACY OPTICAL STANDARDS
OPTICAL INDUSTRY STANDARDS

THE BACKBONE OF AI INFRASTRUCTURE GROWTH

AI’s rapid rise has placed unprecedented demands on data centre networks, interconnect fabrics, and compute systems. With AI workloads defined by massive data movement, compute-intensive training, and latency-sensitive inference, the challenge goes far beyond faster compute chips and accelerators. What is needed is an efficient, standardised infrastructure to move data quickly and reliably between components, says OIF President Nathan Tracy.

At the heart of this infrastructure are industry standards: technical specifications and interfaces that ensure devices from different vendors can communicate seamlessly, reliably, and cost-effectively. Without standards, innovation would be siloed within proprietary systems, leading to fragmentation, vendor lock-in, higher costs, and slower-developing or incomplete ecosystems.

WHY DO STANDARDS MATTER?

AI infrastructure is not just about faster processors. The real performance gains depend on how data moves between accelerators (e.g., GPUs, TPUs, AI ASICs), memory and storage, networking fabric, and remote data centres. This data movement involves optical and electrical interfaces, transceivers, backplanes, cables, and integrated modules. A single AI workload may cascade across dozens of servers, hundreds of switches, and thousands of optical links. Performance also relies on software tools and platforms that create, process, manage, and automate workloads and network operations.

Standards matter because they ensure interoperability, enable scalability, guarantee reliability, and lower system-level cost. Interoperability ensures components like pluggable modules and chips from different vendors work together seamlessly, which reduces investment risk and accelerates the deployment of AI infrastructure. Scalability is not just about adding more compute: it means supporting higher raw-bandwidth links, increased bandwidth density, and more complex topologies without redesigning systems from scratch. Reliability standards set expectations for performance margins, error rates, signal integrity, and diagnostics, ensuring systems behave predictably under strain. Finally, standard interfaces allow commodity supply chains, volume manufacturing, and broader vendor competition to drive down system-level costs.

[Figure: Fragmented ecosystem without standards]

THE CHALLENGES

However, establishing standards that meet the dynamic needs of AI is non-trivial. AI infrastructure has unique demands, which present several challenges in standardising AI interconnect.

Exploding bandwidth needs: As AI models balloon in size and distributed training workloads proliferate, networks must handle terabits per second (Tbps) of data across nodes. Traditional Ethernet or InfiniBand approaches strain under such loads without advances in physical-layer technologies.

Power and thermal constraints: Higher-performance links consume more power and generate more heat. Standards must create pathways for efficient, low-power signalling and advanced cooling, especially as devices pack more capability into smaller footprints.

Signal integrity at high speeds: At data rates of 112 and 224 Gbps per lane and beyond, maintaining clean signals over copper and optical media is challenging. Electrical noise, crosstalk, and loss become significant hurdles.

Co-packaging and integration trends: To reduce latency and increase bandwidth, there is growing emphasis on co-packaged optics, where optical components are placed in close thermal and mechanical proximity to the switching silicon. This blurs the line between networking and compute, complicating traditional modular approaches.
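To put the per-lane rates in context, the raw aggregate bandwidth of a multi-lane interface is simply lanes × per-lane rate. A minimal sketch follows; the eight-lane configurations are common industry examples, not figures taken from this article, and the rates shown are raw signalling rates before FEC and encoding overhead:

```python
def aggregate_gbps(lanes: int, gbps_per_lane: int) -> int:
    """Raw aggregate signalling rate of a multi-lane interface,
    before FEC and encoding overhead."""
    return lanes * gbps_per_lane

# 8 lanes x 112 Gbps -> 896 Gbps raw (roughly the 800G class)
print(aggregate_gbps(8, 112))  # 896
# 8 lanes x 224 Gbps -> 1792 Gbps raw (roughly the 1.6T class)
print(aggregate_gbps(8, 224))  # 1792
```

The arithmetic makes the scaling pressure concrete: doubling the per-lane rate, rather than adding lanes, is what keeps bandwidth density growing without enlarging module footprints.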
Without industry-level coordination, proprietary interfaces could proliferate, slowing adoption and limiting multi-vendor ecosystems. Addressing these challenges requires standards that evolve with the technology, and that is where organisations like the OIF play a pivotal role. The OIF is an industry body focused on defining implementation agreements (IAs) and interoperability specifications for optical, electrical, and related management interfaces.

COHERENT OPTICS FOR LONG-HAUL AND HIGH-CAPACITY CONNECTIVITY

Coherent optical technology uses advanced modulation and digital signal processing (DSP) to push hundreds of gigabits of data per wavelength across long distances. While coherent optics originated in carrier networks, AI data centres, cloud providers, and hyperscalers increasingly leverage coherent links for inter-data centre connectivity, high-capacity aggregation, and efficient scaling to 400 Gbps, 800 Gbps and beyond. OIF has driven the evolution of coherent optics, transforming it from a specialised, high-cost, long-haul technology into standardised, pluggable, and interoperable solutions for campus, metro, and data centre interconnect. OIF IAs have facilitated the migration from 100G to 400G and
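The capacity of a coherent wavelength follows from three factors the article alludes to: symbol rate, modulation order, and dual-polarisation transmission. A hedged sketch of that arithmetic is below; the 60 Gbaud DP-16QAM figures are illustrative values broadly typical of 400G-class coherent links, not numbers taken from this article:

```python
import math

def raw_line_rate_gbps(baud_g: float, qam_order: int, polarisations: int = 2) -> float:
    """Raw coherent line rate in Gbps, before FEC overhead:
    symbol rate x bits per symbol x number of polarisations."""
    bits_per_symbol = math.log2(qam_order)  # 16-QAM -> 4 bits/symbol
    return baud_g * bits_per_symbol * polarisations

# DP-16QAM at 60 Gbaud: 60 x 4 x 2 = 480 Gbps raw, leaving
# headroom to carry ~400G of payload after FEC overhead
print(raw_line_rate_gbps(60, 16))  # 480.0
```

The same formula shows why scaling past 800G pushes on both axes at once: higher baud rates and denser constellations, each of which leans on the DSP advances the article describes.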
ISSUE 43 | Q1 2026
www.opticalconnectionsnews.com