FROM THE INDUSTRY
For years, the telecommunications industry was told a familiar story: networks were mature, access was commoditised, and the future belonged to software platforms and hyperscale clouds. Capital intensity was framed as a disadvantage. Physical infrastructure was something to be minimised, abstracted, or outsourced. AI has turned computing back into a physical system. It consumes power, generates heat, moves massive volumes of data and depends on silicon, optics, cooling, land, and high-capacity networks. In that environment, infrastructure is the constraint, not a background function.
As a result, the industry is undergoing a structural shift:
• From generic facilities to AI-optimised data centres
• From best-effort connectivity to deterministic, high-capacity transport
• From abstract “cloud regions” to geographically grounded infrastructure
We are undergoing a compute transformation: from the virtual to the industrial. And industrial systems are governed by physical bottlenecks, not software abstractions.

The AI Data-Centre Boom Is a Network Infrastructure Boom

What is often missed in AI discussions is that data centres are only as valuable as the networks that connect them. AI reshapes traffic patterns across the entire stack:
• Heavy east–west flows inside and between data centres
• Sustained metro and long-haul transport between cores and edges
• Tight latency and synchronisation requirements for inference, automation and machine control
This is why access networks, metro fibre, optical transport and interconnection are becoming gating factors for AI deployment. Increasingly across markets, the limiting resource is power and fibre rather than GPUs. Across the US, UK and Europe, new AI-ready facilities are being planned in lockstep with dedicated metro rings, dark fibre builds and private optical interconnects. In emerging markets, national fibre backbones and landing stations are increasingly prerequisites for attracting AI investment. Facilities with dense fibre reach, route diversity and proximity to aggregation points fill faster and command premium pricing. Much has been written about what AI is capable of and its potential impact, but few have recognised that AI is repricing infrastructure, not merely consuming it.

Hyperscalers Are Spending – But They Are Also Bounded

Hyperscalers are investing at unprecedented scale. Hundreds of billions of dollars per year are flowing into silicon, power infrastructure, cooling systems, land and network connectivity. Yet scale alone does not equate to dominance over the networks that extend intelligence into the real world. Hyperscalers optimise inside the data centre. Communication service providers control what lies outside it: access networks, aggregation layers, transport infrastructure, rights-of-way, local power relationships and regulatory interfaces. These assets are capital-intensive, slow to replicate and deeply embedded at national scale. More importantly, AI is pushing compute outward. Inference, real-time decisioning and data-localised processing increasingly sit closer to users, enterprises and machines.
And in economics, whoever controls the constraint holds the leverage.
AI Is Forcing a Return to Physical Reality
Modern AI workloads are radically different from the general-purpose computing that shaped the last cloud cycle. Training and inference clusters concentrate enormous compute into dense footprints, pushing power delivery and thermal limits well beyond traditional data-centre designs.
That favours infrastructure already embedded in regional and national footprints.
Volume 48 No.1 MARCH 2026