
It is not just about link speed. Each jump brings a set of challenges that demand deep industry collaboration. Higher symbol rates and density mean tougher signal integrity problems, much tighter connector specifications, more complex optics integration, and new power and thermal realities in both transceivers and cabling. IEEE’s 802.3dj standard for 1.6 Tb/s Ethernet is also pushing the boundaries of materials science, and engineers are exploring new connector designs and photonic integration techniques to maintain signal quality at these speeds. This represents a rethinking of how to move data at scale, and it changes how optical fiber infrastructure, connectors, and cable management are designed.

MORE FIBER, MORE POWER: THE CRITICAL ROLE OF OPTICAL FIBER DENSITY

Optical fiber infrastructure has moved front and center in the AI era: no longer a background player, but a core strategic asset that directly defines capacity, scalability, and efficiency. Structured cabling cannot be an afterthought; it is a risk control mechanism and a performance insurance policy. Cleanliness, documentation, accessible routing, and testing discipline drive outcomes. Co-packaged optics, which integrates optical engines directly into the same silicon package as a processor or switch ASIC, reduces losses by shrinking the electrical distance between the switch silicon and the point where the signal becomes light.

Today’s advanced AI clusters set new benchmarks for optical fiber density. It is now common to see deployments that require two, four, or even 10 times the optical fiber count of previous-generation hyperscale sites. Each new cluster pushes the practical and logistical limits: Clos and fat-tree fabric topologies, coupled with the explosive growth in GPU-to-GPU communications, demand staggering aggregate throughput and ultra-low, deterministic latency. This creates both opportunities and engineering challenges:

• Aggregate bandwidth at scale means ever-higher optical fiber counts per rack, often reaching several thousand individual connections.
• Rapid cluster scaling requires infrastructure that can expand seamlessly, without the need for disruptive recabling or rework.
• Physical realities, from constrained airflow and cooling efficiency to accessibility for troubleshooting, are now optical fiber-layer considerations.
• Proper installation is critical. At 400 and 800 Gb/s line rates, there is no slack in the link budget; a dusty ferrule or a kinked patch cord can lower signal quality enough to starve a set of nodes (see the loss-budget sketch after this list).
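To make that point concrete, here is a minimal loss-budget sketch in Python. The channel budget, attenuation, connector, and splice figures are illustrative assumptions drawn from typical component data sheets, not values from any particular transceiver specification.

# Illustrative single-mode channel loss budget; all values are assumed, typical
# figures for the sake of the example, not vendor or standards numbers.
CHANNEL_BUDGET_DB    = 3.0    # assumed allowable channel insertion loss
FIBER_LOSS_DB_PER_KM = 0.4    # typical installed single-mode attenuation
CONNECTOR_LOSS_DB    = 0.5    # per mated connector pair, clean and in spec
SPLICE_LOSS_DB       = 0.1    # per fusion splice

def channel_loss_db(length_km, connectors, splices):
    """Sum the insertion loss of a simple point-to-point channel."""
    return (length_km * FIBER_LOSS_DB_PER_KM
            + connectors * CONNECTOR_LOSS_DB
            + splices * SPLICE_LOSS_DB)

loss = channel_loss_db(length_km=0.1, connectors=4, splices=2)
margin = CHANNEL_BUDGET_DB - loss
print(f"Channel loss {loss:.2f} dB, remaining margin {margin:.2f} dB")
# Prints roughly: Channel loss 2.24 dB, remaining margin 0.76 dB

Even with every component in spec, the remaining margin in this example is well under a decibel, which is why a single contaminated ferrule can take a link out of service.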

Meeting these demands calls for a new toolset: small-form-factor (SFF) connectors in 16-, 24-, or 32-fiber configurations; truly bend-insensitive cables; and modular, highly organized cable management systems designed from the ground up for AI-scale operations. Some state-of-the-art facilities now report more than one million optical fiber terminations on a single site. At this scale, optical infrastructure is not just about connectivity; it is an operational and thermal constraint, an enabler of agility, and a competitive differentiator.

MULTI-CORE FIBER OFFERS ENHANCED CAPACITY

As the industry grapples with the physical limits of AI infrastructure, multi-core fiber (MCF) has transitioned from an emerging innovation to a foundational technology for the next wave of buildouts. In high-density AI fabrics, where every square millimeter of rack space and every cubic centimeter of pathway is critical, MCF delivers a decisive advantage (Figure 4). By integrating multiple optical fiber cores, often four, eight, or more, into a single strand, it multiplies bandwidth within the same physical footprint. This is a practical solution to today’s optical fiber congestion challenges. Multi-core fiber also pairs naturally with co-packaged optics that require very high port counts in a small footprint. For teams deploying AI clusters, MCF provides tangible value:

• It directly addresses the immense optical fiber count required for terabit-scale GPU-to-GPU fabrics, reducing cabling and mitigating the associated airflow and thermal management issues (see the fiber-count sketch after this list).
• It serves as a critical enabler for the next generation of co-packaged optics (CPO) and photonic chip integration, where ultra-high-density input/output is essential.
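As a rough illustration of the cabling reduction, the sketch below compares strand counts for a hypothetical GPU pod wired with conventional single-core fiber versus four-core MCF. The GPU count, ports per GPU, and fibers per port are invented example figures, not numbers from any vendor or deployment.

# Hypothetical strand-count comparison: single-core fiber vs. four-core MCF.
# All quantities are invented example numbers for illustration only.
import math

GPUS_PER_POD    = 1024
PORTS_PER_GPU   = 8      # assumed: one optical port per fabric plane
FIBERS_PER_PORT = 2      # assumed: duplex single-mode link per port
CORES_PER_MCF   = 4

paths_needed = GPUS_PER_POD * PORTS_PER_GPU * FIBERS_PER_PORT
scf_strands  = paths_needed                            # one core per strand
mcf_strands  = math.ceil(paths_needed / CORES_PER_MCF)

print(f"Optical paths required: {paths_needed}")   # 16384
print(f"Single-core strands:    {scf_strands}")    # 16384
print(f"Four-core MCF strands:  {mcf_strands}")    # 4096

The same aggregate bandwidth rides on a quarter of the strands, which is exactly the congestion, airflow, and pathway relief described in the list above.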

The conversation around MCF has evolved as major hyperscalers have moved beyond successful pilots to active production deployments. The challenge now is scaling the ecosystem: refining connectorization standards, optimizing termination practices for field deployment, and ensuring the supply chain can meet the accelerating demand.

REDUCING LATENCY WITH HOLLOW-CORE FIBER

While MCF addresses the challenge of bandwidth density, hollow-core fiber (HCF) tackles the equally critical issue of latency. For the most demanding AI workloads, HCF is proving to be a game-changing technology (Figure 5). In a standard optical fiber strand, light propagates through glass; in hollow-core designs, it travels through air, guided by a photonic lattice. Light moves faster in air than in glass, so propagation delay falls by roughly one-third. That might seem incremental until it is multiplied by millions of synchronization events in a training run. The savings translate into less idle time, faster convergence, and greater freedom to spread a cluster across a campus or a metro region without paying a large latency penalty. Latency can now be optimized at the physical layer, not just through network architecture.
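To put that latency argument in numbers, the sketch below estimates one-way delay per kilometer in solid-core glass versus an air-guided core and then scales the difference across a long training run. The group index is standard physics; the span length and synchronization-event count are assumed example values.

# Rough propagation-delay comparison: solid-core glass vs. hollow (air) core.
# The group index is standard physics; span length and step count are assumed.
C_KM_PER_S = 299_792.458       # speed of light in vacuum, km/s
N_GLASS    = 1.468             # approximate group index of single-mode fiber
N_AIR      = 1.000             # hollow core approximates propagation in air

delay_glass_us = 1e6 * N_GLASS / C_KM_PER_S    # ~4.90 microseconds per km
delay_air_us   = 1e6 * N_AIR   / C_KM_PER_S    # ~3.34 microseconds per km
reduction      = 1 - delay_air_us / delay_glass_us

LINK_KM    = 10.0              # assumed metro-scale span between halls
SYNC_STEPS = 1_000_000         # assumed synchronization events in a run

saved_s = SYNC_STEPS * 2 * LINK_KM * (delay_glass_us - delay_air_us) / 1e6

print(f"Delay reduction per km: {reduction:.0%}")              # ~32%
print(f"Round-trip time saved over the run: {saved_s:.0f} s")  # ~31 s

Real collectives traverse several switch hops per step, so this simple estimate understates the effect; the point is only that the physical-layer delta compounds across a run.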

Not every optical fiber question involves the exotic. The practical matter of single-mode versus multimode still arises on every project. Multimode remains useful in many enterprise environments with short links and tight budgets. At hyperscale, however, operators prefer to avoid maintaining two optical ecosystems. Single-mode wins on reach, supply chain simplicity, and compatibility with coherent optics. The result is a steady consolidation around single-mode for new builds.

BRIDGING THE DISTANCE WITH COHERENT OPTICS

After exploring the dense, internal fabrics of AI data centers, the next task is to look at the global AI ecosystem. AI workloads are inherently distributed. Whether it is massive training models requiring collaboration across regions, seamless data synchronization between hyperscalers, or instantaneous inference results delivered to a global user base, the demand for high-capacity, low-latency data center interconnect

FIGURE 4: A diagram of a high-core-density transmission system using multicore fiber (MCF), showing transmitters (Tx1–Tx4) and receivers (Rx1–Rx4) linked by single-core fiber (SCF), multi-core fiber/cable (MCF), connectors/splices, and fan-in/fan-out (FIFO) devices. Source: AFL
