FROM THE INDUSTRY

Transparency and Technical Safeguards

Any potential regulatory framework will require some level of transparency and technical safeguards to ensure that AI is not operating as a black box. We need to know what data the algorithms are fed and how they use it, both to ensure that creators are fairly compensated and to avoid building systemic biases into what will become ubiquitous technology. In other words, whatever system is chosen, it needs to be tracked and policed to ensure compliance. And this ‘policing’ needs to be paid for.

This will entail:

• Provenance tracking: Requiring AI companies to maintain comprehensive records of their training data sources, making it possible to audit systems and verify compliance (a minimal sketch of such a record follows this list).

• Attribution systems: Building technical capabilities into AI systems that credit original creators when their work influences outputs.

• Content authentication: Developing standards for watermarking or otherwise identifying AI-generated content to distinguish it from human-created work.
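To make the provenance-tracking idea concrete, here is a minimal sketch in Python of what one audit record might look like. Everything in it is an assumption for illustration: the field names, the licence categories and the JSON-lines manifest format are not taken from any existing standard.

```python
# Hypothetical sketch of a training-data provenance record and manifest.
# Field names and licence categories are illustrative, not a standard.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    source_url: str        # where the work was obtained
    rights_holder: str     # who should be credited or compensated
    licence_status: str    # e.g. "licensed", "public-domain", "unverified"
    content_sha256: str    # fingerprint of the exact bytes used for training
    retrieved_at: str      # ISO-8601 timestamp, for audit purposes

def record_source(source_url: str, rights_holder: str,
                  licence_status: str, content: bytes) -> ProvenanceRecord:
    """Create an auditable record for one item added to the training corpus."""
    return ProvenanceRecord(
        source_url=source_url,
        rights_holder=rights_holder,
        licence_status=licence_status,
        content_sha256=hashlib.sha256(content).hexdigest(),
        retrieved_at=datetime.now(timezone.utc).isoformat(),
    )

def append_to_manifest(record: ProvenanceRecord,
                       path: str = "training_manifest.jsonl") -> None:
    """Append the record to an append-only manifest that an auditor can replay."""
    with open(path, "a", encoding="utf-8") as manifest:
        manifest.write(json.dumps(asdict(record)) + "\n")

if __name__ == "__main__":
    rec = record_source(
        source_url="https://example.com/article-123",  # placeholder source
        rights_holder="Example Publisher Ltd",          # placeholder rights holder
        licence_status="licensed",
        content=b"the exact text ingested into the training set",
    )
    append_to_manifest(rec)
```

Replaying a manifest of this kind would let an auditor or rights holder answer the basic question current systems cannot: was a given work used, and under what terms.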

These technical measures would address what many legal experts consider a fundamental flaw in current AI development: the lack of transparency about what data is being used and how it influences outputs. The publishing industry could offer a useful approach. Copyright registration systems and ISBN standards create a framework for tracking and attributing written works. Similar systems could be developed for AI training data, creating both accountability and the technical infrastructure for fair compensation.
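Continuing the ISBN analogy, the short sketch below joins a usage log from training against a rights registry keyed by a work identifier, producing per-rights-holder tallies that a compensation scheme could build on. The identifiers, the registry contents and the flat per-use rate are all hypothetical.

```python
# Hypothetical sketch: join training-data usage against an ISBN-style rights
# registry to work out who would be owed compensation. Identifiers, registry
# entries and the per-use rate are invented for illustration.
from collections import Counter

# An ISBN-like registry: work identifier -> rights holder.
RIGHTS_REGISTRY = {
    "WORK-0001": "Example Publisher Ltd",
    "WORK-0002": "Independent Author A",
}

# Usage log produced during training: one entry per ingestion of a work.
usage_log = ["WORK-0001", "WORK-0002", "WORK-0001", "WORK-9999"]

def compensation_report(usage, rate_per_use=0.01):
    """Tally uses per rights holder and flag works with no registry entry."""
    per_holder = Counter()
    unregistered = Counter()
    for work_id in usage:
        holder = RIGHTS_REGISTRY.get(work_id)
        if holder is None:
            unregistered[work_id] += 1   # needs manual rights clearance
        else:
            per_holder[holder] += 1
    payments = {holder: count * rate_per_use for holder, count in per_holder.items()}
    return payments, dict(unregistered)

payments, unknown = compensation_report(usage_log)
print(payments)  # {'Example Publisher Ltd': 0.02, 'Independent Author A': 0.01}
print(unknown)   # {'WORK-9999': 1}
```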

AI as a Public Resource

Some experts advocate treating advanced AI systems like public utilities or natural monopolies. This would work similarly to the electricity sector, for example, where the national grid is seen as a natural monopoly and the government sets standards and expectations for managing it as a public resource.

• Private companies would continue developing AI, but under enhanced regulatory oversight

• Transparency requirements would include regular audits and public reporting

• Universal access provisions would ensure broad distribution of benefits

• Price controls or licensing requirements would prevent monopolistic practices

This approach draws from how telecommunications, electricity and other essential services are regulated in many countries. It acknowledges both the innovation potential of private enterprise and the public interest in fair, accessible AI systems.

The International Regulatory Landscape

Different regions are taking distinctly different approaches to AI regulation:

The European Union: Comprehensive Protection

The EU’s AI Act takes a risk-based approach, imposing stringent requirements on high-risk applications while allowing more flexibility for lower-risk uses.

Regarding training data, the EU leans toward an opt-in model that requires permission before using copyrighted material. For businesses operating in Europe, this means higher compliance costs but also a clearer regulatory landscape and potentially greater consumer trust.

The United Kingdom: Innovation First

The UK is positioning itself as a pro-innovation hub, with proposed regulations that favour AI developers. Its controversial opt-out model places the burden on creators to explicitly mark their work as not available for AI training. Critics argue this approach unfairly disadvantages individual creators, who may lack the technical knowledge or resources to implement protection measures. It essentially tells creators that anyone can use their property unless they implement specific technical measures that are not yet developed or available – measures many creators may not understand.
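In practice, the difference between the EU and UK models comes down to the default applied to a work that carries no signal at all. The sketch below shows how a corpus-building pipeline might encode each default; the permission and opt-out flags are invented for illustration, since real signals would come from licences, contracts or machine-readable reservations.

```python
# Hypothetical sketch contrasting opt-in (EU-style) and opt-out (UK-style)
# defaults when assembling a training corpus. The "permission" and "opted_out"
# flags are illustrative stand-ins for real licensing or reservation signals.

works = [
    {"title": "Licensed article",   "permission": True,  "opted_out": False},
    {"title": "Unmarked blog post", "permission": None,  "opted_out": None},
    {"title": "Reserved novel",     "permission": False, "opted_out": True},
]

def opt_in_corpus(works):
    """EU-style default: exclude a work unless the creator has given permission."""
    return [w for w in works if w["permission"] is True]

def opt_out_corpus(works):
    """UK-style default: include a work unless the creator has actively opted out."""
    return [w for w in works if not w["opted_out"]]

print([w["title"] for w in opt_in_corpus(works)])   # ['Licensed article']
print([w["title"] for w in opt_out_corpus(works)])  # ['Licensed article', 'Unmarked blog post']
```

The unmarked work is the crux: under opt-in it stays out of the corpus until permission is obtained, while under opt-out it is swept in automatically, which is exactly the burden-shifting that critics of the UK proposal object to.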

The United States: Sector-Specific Approach

The US has thus far avoided comprehensive AI regulation, focusing instead on sector-specific guidelines and voluntary frameworks. This creates a more fragmented landscape in which the rules for using data may vary significantly across industries. The approach offers flexibility but creates regulatory uncertainty for businesses, which may need to navigate different requirements across sectors and jurisdictions, in turn increasing costs and risks.

Conclusion

The current regulatory vacuum around AI’s use of data cannot persist indefinitely. Whether through government regulation, industry self-regulation, or landmark legal cases, new frameworks for managing AI’s relationship with human creativity must emerge. The businesses that thrive won’t be those extracting maximum short-term value from unregulated data harvesting, but those building sustainable models that respect and reinforce the creative ecosystem upon which AI ultimately depends.
