FROM THE INDUSTRY
The Industrial Revolution—and the pivotal role played by James Watt’s steam engine—is often cited when discussing fundamental changes to our economic and social fabric. Since then, we’ve had a number of other “revolutions”. The harnessing of electricity ushered in mass production, electrified cities, and a communications revolution driven by the telegraph and later the telephone. Cars existed prior to Henry Ford’s moving assembly line, but this automotive revolution made them affordable, fundamentally changing global transportation, urban planning and industry. The Space Age—beginning in 1957 with the launch of Sputnik—transformed communication and navigation, and the microprocessor, the Internet, DNA technology and mobile phones all kicked off their own revolutions. But the Industrial Revolution is still the biggie – the most bang-for-your-buck we’ve had in terms of shifting human civilisation. Until now. Artificial Intelligence is coming for James Watt’s crown. And AI’s revolution will be very different. Unlike previous technological revolutions that primarily transformed industries reliant on physical labour, AI’s impact extends to intellectual and creative domains previously considered uniquely human.
AI’s appetite for data

Modern AI systems learn by digesting vast quantities of human-created content. ChatGPT, Copilot and DALL-E are sophisticated pattern-recognition systems trained on billions of examples of human creativity and knowledge. Initially, tech companies trained these models on publicly available data, but as models grew more sophisticated, they required ever more data. Companies expanded their harvesting to include copyrighted content, paywalled articles and private repositories. And that’s a problem for creators relying on compensation for their efforts, skill and talent. Many AI companies delete their training data after models are built, making it nearly impossible to audit models for bias or track the use of copyrighted material. But even if this were not the case, most jurisdictions currently have no specific regulations governing how companies can use publicly available data for AI training. This regulatory vacuum has allowed AI developers to operate under a take-first-ask-later approach, creating multi-billion-dollar technology platforms using content they didn’t create or license.

Undermining creative industries

The creative sector faces a unique double threat: not only might their jobs be automated, but their existing work could be used to train the very systems replacing them. When AI systems can freely reproduce the style and substance of human creators without compensation, we’re looking at a potentially destructive cycle:

1. AI systems train on human-created content without compensation
2. Economic incentives for human creation diminish
3. New content production declines in quantity or quality
4. AI systems have less novel material to learn from
5. AI outputs become increasingly derivative and homogenised

So, it’s in everyone’s best interests for creators to continue producing original content by being properly compensated for their contribution to AI learning. But will this happen without regulation? As governments worldwide grapple with these challenges, several regulatory approaches are emerging.

Opt-in or Opt-out Models

The simplest solution could be to create a system for opting content in or out of AI training models. In theory, this could be quick to implement with minimal complexity. Yet, given that some models have already been trained on copyrighted content (which should already be a legal “opt-out”), it might not be particularly effective.

• Opt-Out Model (UK Proposal): Content creators must explicitly mark their work as not available for AI training. This places the burden on creators to protect their content.

• Opt-In Model (Advocated by Creator Organisations): AI companies must obtain permission before using copyrighted material, similar to how music licensing works.

For businesses, an opt-out system offers fewer obstacles to AI development but creates long-term legal uncertainty. An opt-in system provides clearer legal boundaries but potentially slower access to training data. The UK’s proposed opt-out mechanism is particularly contentious. It’s essentially telling creators that someone can take their property unless they explicitly post a “No Trespassing” sign – written in a language that hasn’t been invented yet. Critics argue this approach heavily favours large tech companies, as creators could easily lose rights to their work simply by forgetting to check a box, or by failing to implement technical measures they may not even understand. A further issue is policing the scheme: ensuring that opted-out data is not used, whether inadvertently or deliberately.

Data Rights and Compensation Models

Similar to how music and literary rights work, content creators could receive compensation when their work is used for AI training. This could be handled on an ad-hoc basis, like music streaming, or through government distribution via a digital tax.

• Collective licensing: Creators register with collecting societies that negotiate with AI companies and distribute payments based on usage. This model exists in music with performing rights organisations such as PRS in the UK, ASCAP and BMI in the USA, GEMA in Germany and SACEM in France.

• Data dividend: A tax or fee on AI companies based on their data usage, with proceeds distributed to creators. This resembles public lending rights systems in countries such as the UK, Canada and Australia, where authors receive payments when libraries lend their books.

• Direct licensing: Individual negotiations between major content producers and AI companies, with standardised terms for smaller creators.
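The usage-based payouts behind collective licensing and data-dividend schemes boil down to a pro-rata split of a pool. A minimal sketch, with illustrative names and figures (real schemes would weight by work length, market rates and so on):

```python
def pro_rata_payout(pool: float, usage: dict[str, int]) -> dict[str, float]:
    """Split a licensing pool among creators in proportion to recorded usage."""
    total = sum(usage.values())
    if total == 0:
        # Nothing was used, so nobody is owed anything.
        return {creator: 0.0 for creator in usage}
    return {creator: pool * count / total for creator, count in usage.items()}

# Hypothetical example: a £1m pool split across three authors
# whose works were used 600, 300 and 100 times in training.
payouts = pro_rata_payout(1_000_000.0, {"author_a": 600, "author_b": 300, "author_c": 100})
```

Here `author_a` receives 60% of the pool; the open question for any real scheme is how "usage" is measured when training data is deleted after the model is built.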
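On the "technical measures" side of opt-out, the mechanism usually pointed at today is a robots.txt directive naming specific AI crawlers. A minimal sketch of how such a signal is parsed, using Python's standard urllib.robotparser (the GPTBot token and URLs are illustrative, and this illustrates exactly the enforcement gap noted above – the directive is only advisory):

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt: block one named AI crawler, allow everything else.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant AI crawler checks before fetching; a non-compliant one simply doesn't.
ai_allowed = parser.can_fetch("GPTBot", "https://example.com/article")
other_allowed = parser.can_fetch("SomeOtherBot", "https://example.com/article")
```

Here `ai_allowed` is False and `other_allowed` is True – but nothing forces a crawler to run this check, which is the policing problem in miniature.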
SEPTEMBER 2025 Volume 47 No.3