ICT Today Jan/Feb/Mar 2026

the 802.3 Ethernet infrastructure standard in both ubiquity and scale of deployment. It also uses RDMA. Its predecessor (RoCEv1) struggled with packet loss and congestion, which gave IB a “first-round knockdown” in overall performance. 4 Now comes the second round.

Think of UET as a modern traffic system. Sometimes traffic flows nicely; sometimes it is bumper to bumper. UET helps control speed and throughput, allowing some data loss to occur but providing two fast mechanisms to fix it: explicit congestion notification (ECN) and link layer recovery (LLR), which acts as a rescue mechanism for packets. If a packet gets dropped, the data link layer (i.e., OSI Layer 2) immediately re-sends that exact packet without involving the rest of the OSI stack. 5

UET’s other main benefit comes from a competitive, multi-vendor ecosystem. The availability of parts, pieces, and vendors is much wider, resulting in a lower total cost of ownership (TCO), and it can even simplify integration into existing data centers.

UET also brings new features to the table. LLR provides faster error recovery and couples nicely with packet spraying for better load balancing and congestion control, efficiently utilizing all available links and adapting to burst congestion almost instantly. Packet spraying itself is an advantage: it splits a single stream (or traffic flow) across different paths and ensures the pieces arrive at the same time. In other words, if route 1 has a traffic jam, the technology uses all available routes to keep traffic flowing as efficiently as possible. The intent is equal utilization of all available routes, thus increasing efficiency.

THE SHOWDOWN

Each technology has its strengths and weaknesses. Now we bring them together to face off, using performance and congestion control as the benchmarks for success.
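The packet-spraying idea described earlier can be sketched in a few lines of Python. This is a toy illustration only: the path names, packet counts, and simple round-robin policy are illustrative assumptions, not details of the UET specification.

```python
# Toy sketch of packet spraying: one flow is distributed across several
# equal-cost paths instead of being pinned to a single route.
# Path names and the round-robin policy are illustrative assumptions.
from itertools import cycle

def spray(packets, paths):
    """Assign each packet of a single flow to the next path, round robin."""
    assignment = {path: [] for path in paths}
    rr = cycle(paths)
    for pkt in packets:
        assignment[next(rr)].append(pkt)
    return assignment

flow = [f"pkt-{i}" for i in range(8)]
paths = ["spine-1", "spine-2", "spine-3", "spine-4"]
result = spray(flow, paths)
# Each of the 4 paths carries 2 of the 8 packets, so a jam on any one
# path delays only a quarter of the flow.
```

In a real fabric the receiver must also reorder the sprayed packets, which is part of why spraying and fast link-layer recovery are paired.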
IB excels in hyperscale AI training clusters (e.g., large language models (LLMs) and generative pre-trained transformers (GPTs)). These environments involve thousands of graphics processing units (GPUs) that rely on continuous synchronization. IB’s “lossless by design” nature, coupled with its determinism, drastically improves GPU utilization and reduces the overall time it takes to train a model. 3, 4

The next hit comes from supercomputing environments that run complex scientific simulations such as molecular dynamics and weather modeling. These dedicated, single-tenant, high-performance deployments are more than justified by the mission-critical nature of the work being performed. And to finish the combo, deployments that require the highest bandwidth and lowest latency within a single, homogeneous cluster, called scale-up architecture, are sure to succeed with IB.

In terms of congestion control, IB’s credit-based flow control keeps things civil. No unruly retries. No unnecessary retransmits. Its deterministic, zero-packet-loss architecture guarantees performance.
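The credit-based flow control that keeps IB lossless can be sketched as follows. This is a minimal model, not IB's actual wire protocol: the class names, buffer sizes, and packet counts are illustrative assumptions. The key property it demonstrates is that the sender never transmits into a full buffer, so nothing is ever dropped.

```python
# Minimal sketch of credit-based link-layer flow control: the sender may
# only transmit while the receiver has advertised free buffer credits.
# Names and sizes are illustrative assumptions, not the IB wire format.
class Receiver:
    def __init__(self, buffer_slots):
        self.credits = buffer_slots   # advertised free buffer slots
        self.delivered = []

    def accept(self, pkt):
        assert self.credits > 0       # invariant: never overrun the buffer
        self.credits -= 1
        self.delivered.append(pkt)

    def drain(self, n):
        """Application consumes n packets, returning n credits to the link."""
        self.credits += n

def send(receiver, packets):
    sent, waiting = [], []
    for pkt in packets:
        if receiver.credits > 0:
            receiver.accept(pkt)
            sent.append(pkt)
        else:
            waiting.append(pkt)       # back-pressure: hold the packet, never drop it
    return sent, waiting

rx = Receiver(buffer_slots=3)
sent, waiting = send(rx, ["a", "b", "c", "d", "e"])
# sent carries 'a', 'b', 'c'; 'd' and 'e' wait for credits -- nothing is lost
```

Contrast this with classic Ethernet, which historically drops packets under pressure and relies on upper layers to retransmit.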

it is close. Around 2 microseconds is the goal. The idea is to match or exceed IB by optimizing the entire OSI stack for both high-frequency communication and bulk data movement. 1 In short, UET aims to take what is already available and deliver yet another level of service excellence.

control) readily available and accessible on an already solid Ethernet foundation. This is certainly not the end of the road for our Ethernet contender.
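The ECN feedback loop described in this article (“slow down a bit” when the fabric gets busy) can be modeled in a few lines. The threshold value and the halve-on-mark policy below are illustrative assumptions, not the UET specification; real deployments tune both carefully.

```python
# Hedged sketch of ECN-style congestion feedback: a switch marks packets
# (rather than dropping them) when its queue is deep, and the sender cuts
# its rate on seeing the mark. Threshold and rate policy are assumptions.
def switch_forward(queue_depth, threshold=10):
    """Return True (an ECN mark) when the queue is past the threshold."""
    return queue_depth > threshold

def sender_rate(current_rate, ecn_marked, floor=1):
    # Multiplicative decrease on congestion; gentle increase otherwise.
    if ecn_marked:
        return max(floor, current_rate // 2)
    return current_rate + 1

rate = 40
rate = sender_rate(rate, switch_forward(queue_depth=25))  # marked: rate halves
rate = sender_rate(rate, switch_forward(queue_depth=3))   # clear: rate creeps back up
```

The design point worth noting is that marking is proactive: the sender throttles before queues overflow, instead of discovering congestion through drops.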

IB uses ECMP, which is great for single-path, single-tenant communications. This is why extreme-scale HPC and scale-up architectures shine with IB: these environments require dedicated, single-tenant, high-bandwidth, low-latency networking within a homogeneous compute cluster. UET, however, has packet spraying in its back pocket, breaking single data streams apart across multiple, diverse routes simultaneously. Unlike IB, this ultimately maximizes bandwidth utilization and adapts quickly to congestion, ensuring that data flows effectively, efficiently, and reliably.

Thus, round two goes to Ultra Ethernet.

AUTHOR BIOGRAPHY

Justin W. Hobbs, RCDD, TECH, is a distinguished physical network architect and author with nearly three decades of extensive experience spanning government, military, education, data centers, aviation, and healthcare. He leverages a deep understanding of real-world applications, published standards, and industry best practices to advocate for the awareness and acknowledgment of Information Transport Systems (ITS), Layer 1, and holistic ICT. Mr. Hobbs holds a Bachelor of Science, summa cum laude, from the University of North Carolina at Greensboro, complemented by several prestigious industry certifications from BICSI and AAAE. Justin can be contacted at jhobbs4007@gmail.com.

REFERENCES

1. Ultra Ethernet Consortium. (2023, October). Overview of and motivation for the forthcoming Ultra Ethernet Consortium specification. Ultra Ethernet Consortium.
2. Cisco Blogs. (2025). Ultra Ethernet for scalable AI network deployment. Cisco.
3. Mellanox Technologies. (2014, March). InfiniBand credit-based link-layer flow-control [Conference presentation]. IEEE 802.
4. LINK-PP. (2025). InfiniBand vs RoCE: The network fabric behind the AI data center revolution. LINK-PP.
5. VIAVI Solutions. (2025, August 13). Inside UE 1.0: What Ultra Ethernet means for AI and HPC networks. VIAVI Perspectives.
6. Vitex. (2025, September 9). InfiniBand vs Ethernet for AI clusters: Effective GPU networks in 2025. Vitex.

As for latency, IB is ultra-low: about 1-2 microseconds. For small packets and synchronization workloads, this is the go-to solution. But there are a couple of weaknesses: multipathing and proprietary design. IB’s proprietary nature is a downside, similar to IBM Type-1 and Type-2 cabling and even Token Ring in the early days of networking. Proprietary technology typically carries higher procurement and deployment costs and risks vendor lock-in, all of which results in a higher overall TCO. Which, in a capitalist economy, is exactly what a vendor wants: your business both now and into perpetuity. Multipathing comes up shortly.

it is best suited for cloud and enterprise AI deployments, hybrid deployments, scalable disaggregated architectures, and even AI inference workloads. 5 It is also quite practical and cost-effective.

THE RESULTS ARE IN

As the high-performance compute landscape continues to evolve, consumers weigh raw performance and cost when making decisions. When it comes to raw performance for single-tenant systems, IB wins the day. Its inherent “lossless” design ensures that the most mission-critical communication paths remain pristine. In other words, the world’s largest AI superclusters should plan to keep counting on this technology.

However, Ultra Ethernet represents the inevitable future of AI and HPC deployments. It does so by bringing the performance features of RDMA and the necessary components of congestion control onto an open, scalable, and cost-effective Ethernet foundation. In other words, if you do not need a Ferrari to drive down the road to the grocery store, UET should be considered. UET is gaining ground. It is making high-performance features (i.e., RDMA, congestion

The round one winner is InfiniBand.

UET strikes back at IB with an optimal blend of high performance, cost efficiency, and flexibility, built on its own congestion control: ECN and LLR. These technologies help control the flow of information and throttle it proactively. 6 ECN tells the sender to “slow down a bit” when things get too busy, and LLR kicks in if a packet is lost in transmission, allowing fast, granular recovery without holding up the works. Think of it as a magical lift that instantaneously clears the wreck on the interstate, allowing traffic to continue. UET’s latency might not yet be on par with IB, but

