SCTE Broadband – May 2026

FROM THE INDUSTRY

Health and Safety at Work has been driven by a simple moral imperative: people should not be injured or killed for earning a living (or because of someone else’s work).

Embedded security outcomes have more in common with safe systems of work than we sometimes like to admit. Shared learning sits at the heart of H&S. The discipline has spent decades building cultures of openness, incident learning and sector-wide improvement; engineers arriving in the telecoms world are expected to carry a set of embedded norms that the industry as a whole has shaped over decades. The SHiFT (Safety and Health in Fibre Telecoms) group is an attempt to drive this forward formally in fibre, learning from other sectors where this has long been established practice. Cyber, driven by hostile actors and constant change, is catching up rapidly through information-sharing networks, threat intelligence communities and collaborative frameworks – but these are still siloed, and the barrier to entry is higher than it needs to be. The instinct to hoard information is diminishing across both fields, and for good reason.

New technologies, rapid responses and the regulatory gap: What H&S can learn from cyber

Perhaps the most striking difference between the disciplines is pace. Cyber has long accepted that technology outruns regulation. As a result, it has developed a culture of rapid response, adaptive controls and international collaboration (albeit fragmented across a plethora of competing associations). Cyber does not wait for legislation to catch up – it cannot afford to (although this brings its own challenges). H&S, by contrast, still leans on a regulatory foundation that has barely shifted since the turn of the millennium, despite dramatic changes in how work is designed and delivered. As AI, automation and digital integration reshape physical environments, H&S cannot rely on legislation alone to keep people safe. Cyber’s example is clear: define “what good looks like” ahead of regulators – build leading practice and move even when the rulebook has not.

But cyber’s pitfalls carry lessons too: the discipline is still less outcome-focused, still largely managing the technical jargon the business adopts rather than understanding what risks are truly at play. How much of our AI and automation is really machine learning or scripting? We sometimes focus on the wrong things.

Cyber is rapidly working out how to manage these risks. In telecoms this started with the Telecoms Security Act setting a seven-year improvement programme in train, and it will ultimately extend to all managed network and information services over the remainder of the decade and beyond under the Cyber Security & Resilience Bill (soon to become an Act). H&S can apply this same agility when dealing with technologies that evolve faster than formal assurance cycles can handle – but this has to be grounded in the established outcome-focused approaches which H&S practitioners use.

Emergency response and AI: The challenges we face together

As AI becomes embedded in operational systems, we face a shared ethical challenge: what happens when generative or agentic AI begins influencing decisions that affect human lives? AI can fail silently, behave unpredictably, or produce content that looks authoritative but is completely wrong. In an emergency, that combination is dangerous. A flawed AI-generated method statement, or an automated system that locks a site down at precisely the wrong moment, can turn a manageable event into a crisis. Agentic AI introduces another layer of risk by taking actions autonomously – allocating resources, isolating systems, or prioritising containment over evacuation. Cyber’s logical responses may have catastrophic human consequences, and the ethical question of allowing algorithms to make life-or-death decisions is one that neither profession has fully answered. We need to understand the information AI draws on, the outcomes for which we trust AI to deliver value, and the guardrails required to surround AI-led activities. We also need to acknowledge the risk of AI eroding organisational governance as ‘muscle memory’ becomes weaker, which will only serve to increase the likelihood of failure or harm.
Shared principles, joint testing regimes and clear “human in the loop” safeguards need to be in place to ensure AI enhances, rather than undermines, emergency response.


MAY 2026 Volume 48 No.2
