Finance & Markets
It took centuries for astronomers to discover that a strange lunar dance with the sun's rays was behind the moon's shapeshifting. Now, it takes just seconds for causal machine learning to explain why we see a full moon one evening and a crescent a few nights later.

AI has clearly reached a tipping point. It can calculate the cause of a company's fluctuating stock price, identify investment opportunities, predict market trends and report those movements to customers, and even detect the first signs of fraudulent activity.

The possibilities are enormous. So are the challenges. When Geoffrey Hinton, the 'Godfather of AI', left Google, he warned of the need for a 'Geneva Convention' to avert the existential threat the technology could pose to the human race. Even if we avoid a scenario of autonomous weapons and labour market implosion, firms will have to adapt to survive in this futuristic landscape. To do so, they must face up to a range of issues to ensure they use AI in a way that is safe, transparent and accountable. That is where the Gillmore Centre for Financial Technology, established at Warwick Business School to provide thought leadership and research, can help.

How to comply with AI

Many of the issues firms face stem from the very nature of AI: we are asking machines to solve problems and make decisions without any human involvement. If we are always deferring to a bot, how do we know we are complying with professional standards and regulations? This consideration is particularly important for retailers, insurers and lenders, where fairness is a legal requirement and duty of care is sacrosanct. If you make a lending decision in this sphere, for example, it is essential that you can justify why you offered one type of loan rather than another.
The emerging class of software applications known as regulatory technology, or regtech, can help in this respect. Regtech companies are already active in the marketplace, doing everything from verifying the identities of people who open new accounts digitally to monitoring compliance with information security laws.

However, there is no substitute for the ability to interpret and explain decisions, and a new generation of machine learning models is making this possible. Conventional 'black box' models spotted patterns in masses of data and spewed out a solution with no explanation. The new kids on the block are far more interpretable, offering experts an insight into what caused those patterns to emerge.

The decision tree algorithm is one example of this kind of machine learning. Following a tree-like model of decisions and their possible consequences, it allows a human operator to understand the relationship between different inputs and to identify the features that contribute most to the final decision.
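To make that concrete, here is a minimal sketch of what an interpretable, tree-based loan decision might look like. It assumes the scikit-learn library, and the feature names and synthetic data are purely illustrative; they are not drawn from any model described in this article.

    # Minimal sketch of an interpretable loan model using a decision tree.
    # Library: scikit-learn. Feature names and data are hypothetical, for illustration only.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)

    # Hypothetical applicant features: annual income (in thousands), credit score, debt-to-income ratio
    X = np.column_stack([
        rng.normal(45, 15, 500),
        rng.normal(650, 80, 500),
        rng.uniform(0.0, 0.6, 500),
    ])
    # Stand-in for historical decisions: 1 = approved, 0 = declined
    y = ((X[:, 1] > 620) & (X[:, 2] < 0.4) & (X[:, 0] > 30)).astype(int)

    feature_names = ["annual_income_k", "credit_score", "debt_to_income"]

    # A shallow tree keeps the decision logic small enough to read end to end
    model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

    # Print the learned rules: which thresholds lead to class 1 (approve) or class 0 (decline)
    print(export_text(model, feature_names=feature_names))

    # Show which inputs contributed most to the final decision
    for name, importance in zip(feature_names, model.feature_importances_):
        print(f"{name}: {importance:.2f}")

The printed rules expose the exact thresholds behind each approve-or-decline path, which is precisely the kind of justification a lender needs when asked why one type of loan was offered rather than another.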
Bringing human experts back into the loop in this way is important if they are to track runaway computer bugs, such as the one at investment bank AXA Rosenberg in the late 2000s that selected the wrong securities to invest in and consequently crashed a major pension fund. Nowadays, investment management firms like Rothko Investment Strategies are pioneering the use of AI to drive their stock selection, making it all the more important to build interpretability into their modus operandi, so they can explain their investment decisions to clients. Part of the Rothko story is its collaboration with academia to research new developments in AI, and it is a key participant in the Gillmore Centre for Financial Technology at Warwick Business School. There is a clear competitive advantage for financial companies with the knowledge to engage with top-flight academic institutions. Meanwhile, dyed-in-the-wool organisations that cannot grasp the transformative impact of this technology are failing to invest in the kind of human resource required to meet the challenges and opportunities of this new era.

A new way of working

The sort of human resource needed amounts to a new way of working. As well as the business analysts trained in subjects such as statistics and econometrics who have traditionally staffed financial services operations, corporate bosses now have to add teams of data scientists, data engineers and machine learning experts with completely different skillsets. Integrating and developing those skills will require savvy management and a good deal of strategic imagination from team heads caught on the fintech frontline. One solution is a policy of staffing hybrid teams with 'T-shaped' people, who are specialists in at least one area and knowledgeable in several others. However, as the AI revolution truly takes off, this may well not be enough. Over the longer term, we are likely to see larger companies hit the mergers and acquisitions trail, taking over smaller, innovative firms that have the right AI set-up and know-how.

At the same time, there will always be the clear and present danger posed by bad actors who impersonate well-known companies with the help of AI to defraud consumers. AI professionals, whether they work for banks, credit card firms or retailers, face an ongoing battle to train algorithms to detect this fraudulent activity more efficiently and accurately.
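The article does not say how such fraud detection models are built; one common approach, offered here only as an illustration, is unsupervised anomaly detection over transaction features. The sketch below uses scikit-learn's IsolationForest on synthetic card transactions, and every feature name and parameter is an assumption.

    # Minimal sketch: flagging unusual card transactions with an isolation forest.
    # Feature names and data are synthetic assumptions, purely for illustration.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(1)

    # Hypothetical features per transaction: amount (GBP), hour of day, distance from home (km)
    normal = np.column_stack([
        rng.gamma(2.0, 20.0, 2000),        # typical spend amounts
        rng.integers(8, 23, 2000),         # daytime and evening activity
        rng.exponential(5.0, 2000),        # mostly close to home
    ])
    suspicious = np.column_stack([
        rng.gamma(2.0, 400.0, 20),         # unusually large amounts
        rng.integers(0, 5, 20),            # small-hours activity
        rng.exponential(800.0, 20),        # far from the cardholder's usual area
    ])
    transactions = np.vstack([normal, suspicious])

    # Fit on the mixed stream; the model isolates points that look unlike the bulk
    detector = IsolationForest(contamination=0.01, random_state=0).fit(transactions)

    scores = detector.decision_function(transactions)   # lower = more anomalous
    flags = detector.predict(transactions)              # -1 = flagged for review
    print(f"Flagged {int((flags == -1).sum())} of {len(transactions)} transactions for review")

In practice, flagged transactions would feed a human review queue rather than trigger an automatic block, keeping experts in the loop in the way described above.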
Despite the challenges involved, AI represents the future and offers exciting opportunities to bring positive change to the fintech arena. It is already transforming the consumer experience beyond anything thought possible even a few years ago. For example, large language models will soon take digital linguistics to the point where they can make the most complicated financial products comprehensible to the consumer, and therefore more transparent and accessible. That is just one quick win.

Another is efficiency. Using blockchain and cryptocurrency technologies, decentralised finance, otherwise known as DeFi, is beginning to circumvent traditional banking systems and empower people to make their own direct transactions. This will benefit companies such as exporters: by cutting out financial middlemen like banks and lenders, they could pay much less for their export finance in future and so generate better profits.

How people and companies navigate these huge changes, however, remains the big question. There is no doubt that there will be business casualties in this new age of disruption. Equally, there are huge opportunities for ambitious firms prepared to formulate a cogent AI strategy and to figure out how they are going to resource it. That is why researchers at the Gillmore Centre are exploring all of the challenges and opportunities outlined above, creating a pool of expertise that forward-thinking firms can access. Knowledge partnerships between companies and research labs, such as the one between Rothko and the Gillmore Centre, can provide fintech firms with a launch pad for success. Organisations that fail to keep up with the rapid pace of change are likely to be eclipsed.

TO THE CORE

1. Firms must overcome issues with compliance, interpretability and new ways of working to ensure they use AI in ways that are safe, transparent and accountable.
2. There is no substitute for the ability to interpret decisions made by AI to ensure the company is complying with professional standards and regulations.
3. Forging 'knowledge partnerships', like the one between Rothko and the Gillmore Centre for Financial Technology, can help firms to create and resource a strong AI strategy.
Read more Core Insights on Finance and Markets from Warwick Business School.