FROM THE INDUSTRY
Nokia’s such a household name, but not everyone realises how much it’s evolved over the years. Can you give us a snapshot of the current business?

Nokia’s structure now rests on four divisions. We’ve got Network Infrastructure, which covers IP, optics and fixed networks — that’s the backbone of global connectivity. Then there’s Cloud and Network Services, which handles our software portfolio, including the 5G core. Our Mobile Networks division handles the radio side — 5G radio networks and, ultimately, 6G. And finally, Nokia Technologies, which manages our patents and licensing business. Our R&D investment generates a huge amount of IP, so the licensing arm is a very important part of Nokia’s business. But the three customer-facing divisions — Mobile, Software and Network Infrastructure — are really where we go to market.

One of the topics we’re exploring this issue is AI and networks. How is Nokia approaching it?

We look at it through two lenses: “Networks for AI” and “AI for networks.” Networks for AI is about building the high-capacity, low-latency, ultra-reliable connectivity that AI workloads demand. Right now we’re supporting large centralised data centres running huge model-training jobs, but over time we’ll move toward inference models that also live at the edge — smaller, more localised data centres or even on-prem systems. Those will require a different kind of network topology: distributed, intelligent and extremely responsive. AI for networks flips it around — that’s embedding intelligence into the network itself so it can self-optimise, predict issues and reduce operational costs. Networks are getting too complex to manage manually; AI is key to making them autonomous.

There’s been a lot of talk about an AI bubble — echoes of the dot-com era. What’s your view?

Every technology cycle starts with over-enthusiasm. In networking and telecoms terms, I would call AI a revolution, not an evolution. Amara’s Law sums it up: we overestimate the short term and underestimate the long term. AI is no different. From our perspective, the underlying infrastructure to support it — networks, data centres, power — is absolutely real. Nokia sees data centre and AI traffic growing at around 25–30% a year, and we project that by 2030 roughly a quarter of all WAN traffic will be AI-related. So no, I don’t think this is a passing fad. It’s a structural transformation, and connectivity is at the heart of it.

You mentioned location and power as key issues for data centres, but you also said networks are the “third leg of the stool.” What do you mean by that?

When people plan data centres, they always focus on where they can build them and how they’ll power them. But without networks, the data can’t move — and the whole system breaks down. You need all three: location, power and networks. As AI workloads spread geographically — more edge sites, more inference processing — network reach and resilience become just as important as megawatts or land plots. You can’t train or deploy AI models without fast, sovereign, low-latency connections.

Are we already seeing this move to edge data centres in practice, or is it still theoretical?

It’s starting. Most deployments today are still the big hyperscale data centres, but we’re now seeing planning and investment for edge infrastructure. Over the next few years, as inference workloads take over, we’ll need five to seven times more physical sites to handle them. That’s driven not just by latency, but by data sovereignty and security. Around 90% of enterprises have used public cloud services, but many are now reconsidering — they want to bring sensitive workloads back into private or hybrid clouds. The drivers are clear: geopolitics, cybersecurity and cost. The “cloud repatriation” trend is real, and networks have to evolve to support that.

Let’s talk about sustainability. With so much new infrastructure, can any of this be considered green?

Sustainability is front and centre for us. The elephant in the room is power — how we source it and how much we use. Nokia doesn’t generate power, but we make sure our products consume less of it. Nokia has an ESG policy that puts sustainability at the heart of what we do as a strategic pillar, and we see it as a competitive differentiator. For example, our in-house silicon — the FP5 chip that powers our large routers — uses 75% less energy than the previous generation. We also design our fixed broadband chips, like Quillion, with the same principles. Fibre networks themselves are far more energy-efficient than copper, so the move to full-fibre broadband is a sustainability win too. Nokia has committed to carbon neutrality by 2040, and energy efficiency is now part of every RFP discussion — customers are factoring power costs directly into their procurement decisions.

That home-grown silicon also gives you a security advantage, right?

Exactly. Because we manufacture our own chips, we can embed security features at the hardware level. One example is Deepfield, our DDoS protection platform. It can detect and neutralise an attack in about 30 seconds, before service is affected. Eight of the world’s ten largest internet exchange carriers use Nokia routing gear, and some have the capability to resell that DDoS protection as a white-label security service. It’s a growing part of the market and a good example of how operators can monetise security.
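As a sanity check on the projection quoted in the interview (25–30% annual growth in data-centre and AI traffic, and roughly a quarter of WAN traffic being AI-related by 2030), the compounding is easy to reproduce. The baseline share used below is a hypothetical assumption for illustration only, not a figure from the interview:

```python
# Back-of-envelope check of the compounding behind the 25-30% annual
# growth figure. The 5% baseline AI share of WAN traffic is an assumed,
# illustrative starting point, not a number given in the interview.

def compound(share: float, rate: float, years: int) -> float:
    """Grow an initial traffic share at a fixed annual rate."""
    return share * (1 + rate) ** years

baseline_share = 0.05  # hypothetical AI share of WAN traffic today
for rate in (0.25, 0.30):
    projected = compound(baseline_share, rate, years=6)  # 2024 -> 2030
    print(f"{rate:.0%}/yr over 6 years: {projected:.1%} share")
```

At a 30% compound rate, six years of growth is roughly a 4.8x multiplier, so a modest single-digit baseline share lands near the "quarter of WAN traffic" order of magnitude; at 25% the multiplier is about 3.8x. This is only the arithmetic shape of such projections, not Nokia's actual model.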
Volume 47 No.4 DECEMBER 2025