Core 11: The Change Makers' Manual

Digital Innovation & Entrepreneurship

development since the industrial revolution. It’s like a gold rush, with the emphasis on being the first to develop new technology and get it out there. “As a result, AI has been unleashed on companies and governments and they have to adopt it to survive. The question is, how do they do that in a responsible way?”

There is no doubt that AI – an umbrella term for a range of emerging technologies including machine learning, deep learning neural networks, and large language models (LLMs) – poses huge challenges, or that recent progress has surprised and alarmed experts who have worked on it since the 1980s. However, current models like ChatGPT are unlikely to emulate science fiction’s all-conquering automatons – at least not yet. In reality, the threats created by AI are not shared equally by humanity as a whole (see p33). The greatest risks are faced by those sectors of society that are already vulnerable to disruption.

Nick Chater, Professor of Behavioural Science at WBS, worked with Hinton 30 years ago and continues to work at the intersection of cognitive science and neural networks, editing the book Human-Like Machine Intelligence in 2021.

“ChatGPT is incredibly good at collecting the information you ask for, compressing it, and filling in the gaps,” he says. “If you start typing in a well-known line from Shakespeare, it will finish that speech for you. But when it reaches the end, it will keep going. It will try to improvise and write gibberish.

“As impressive as these large language models may be, they are still only query answering machines. You put information in one end and it shoots out the other.

“They aren’t mulling things over and they certainly aren’t plotting world domination. That’s the positive side, but it doesn’t mean it’s wrong to be worried.

“Recent breakthroughs have been truly astonishing – even Geoff [Hinton] didn’t see them coming. That is really unusual. Normally, these innovations do exactly what we expect or a bit less.

“It means we can’t be sure what will happen next. When the first nuclear explosions were carried out, some scientists said they didn’t know if it would break down space and time and destroy the Earth. They didn’t think that would happen, but they couldn’t be sure. AI is similar.”

Mitigating the risks

Professor Chater is part of the £8 million TANGO project, funded by the European Union’s Horizon Europe research and innovation programme to develop trustworthy AI. His focus will be ‘virtual bargaining’ – the process of deciding how to act in a given situation by imagining what we would do if we had time to negotiate with the other party, then reaching an implicit agreement – and how AI might replicate this vital human process.

“Comprehension and negotiation are the cornerstone of human social interaction,” he says. “These are the challenges that must be met to create AI systems that work collaboratively alongside people, rather than serving as valuable computational tools. We aren’t much closer to that.

“A more pressing concern is that systems like ChatGPT could become the arbitrator of all information. There is a real danger that people will become reliant on these models for information. It would give tech companies – and the governments behind them – unbelievable power.

“That is one of the huge challenges facing regulators.”

Another is the lack of a universal approach to regulation in different countries. The Covid-19 pandemic demonstrated how difficult it is to achieve consensus and co-ordinated action across continents, even in the face of a common threat. How to respond to the rapid progress of AI is provoking similar differences of ideology.

Europe seems set to take a leading role in seeking to regulate AI, hoping that the controls it puts in place will be adopted by multinational companies that are keen to do business on the continent and will cascade around the world, much as they did for GDPR.

The US is some way behind but is also considering stricter rules in the wake of an open letter from the Future of Life Institute in March, calling for AI labs to pause training systems that would be more powerful than GPT-4 for at least six months to allow regulators time to catch up. The letter was signed by more than 1,000 tech leaders, including Apple co-founder Steve Wozniak, Twitter owner and Tesla CEO Elon Musk, and deep learning pioneer and Turing Award-winner Yoshua Bengio. China, on the other hand, would prefer more targeted regulation of algorithms.

James Hayton, Associate Dean of WBS and Senior Research Fellow at the Institute for the Future of Work, says: “Governments and governing bodies across the world are rushing to understand how to balance the need for social protections with the desire to facilitate rather than constrain innovation.

“The UK is emphasising innovation, while the EU is emphasising protection. As a result, there is a clear risk that restrictive policies in one jurisdiction may drive innovation to other countries, at the expense of the cautious nation.”

It is an added incentive for governments, regulators, and individual organisations to embrace the opportunities that AI has to offer. The UK could be one of the biggest beneficiaries, as it is better AI-prepared than many countries. McKinsey predicts that AI will increase GDP by 22 per cent by 2030. It could contribute close to £800 billion to the UK economy by 2035. The key will be to adopt AI in a socially responsible way that recognises the risks both to firms and to the stakeholders whose lives will be affected (such as using the SISA strategy tool advanced by Professor Sotirios Paroutis, see p20).

Professor Ram Gopal is Head of the Gillmore Centre for Financial Technology, which is developing a new GPT focussed on 500 research papers that its academics have produced, making them more accessible to a wider audience.

“In my view, this is not the time to shy away from generative AI,” he says. “We need to recognise that the genie is out of the bottle. We cannot go back, so we ignore this growth at our peril.

“Concerns that much of the content generated is fake or low quality are already outdated. AI is changing at an exponential rate.

“The rate of ‘hallucinations’ is now down to three or four per cent, and while a lack of creativity was once a fault line, there is now ample evidence to suggest otherwise. It can even write code.

“There are certainly challenges for businesses. For example, LLMs can pose legal dangers if the content produced is not regularly scrutinised. Privacy is another issue. Companies, like the rest of us, will have to fine-tune their response.

“If we manage to adapt to this fast-moving situation, then all of us stand to benefit. As astonishing and alarming as it may be, we should embrace AI and harness it for the common good.”

Healthy progress

WBS researchers are already working on AI tools to help tame the Wild West of social media by protecting children and vulnerable adults from harmful content (see p22). These ‘robocop programmes’ could force social media giants to take greater responsibility for creating a safer space online, without resorting to the kind of widespread censorship that would create concerns about freedom of speech.

Healthcare is another area where the potential of AI has generated great excitement. An ageing population has brought more complex care needs and rising costs – a challenge exacerbated by the fallout from the Covid-19 pandemic. AI offers the tantalising prospect of tools that could ease the pressure on overstretched medical staff and resources. Hospitals from Scotland to Switzerland are now trialling AI as a diagnostic tool for radiologists in the hope that it will accelerate and augment the decisions they make.

However, an in-depth study in the US has tempered expectations. Three departments at a major hospital used AI tools to examine breast cancer, lung cancer, and bone age scans. In all three, radiologists found

Warwick Business School | wbs.ac.uk


