AI in decisioning: a calculated risk
Artificial intelligence is set to change the face of the finance industry in the years ahead, whether we like it or not. David Wylie asks whether lenders are fully aware of the regulatory and reputational risks it poses.
Glance through the marketing material of software platforms and it will not be long before you notice a new emphasis on the use of artificial intelligence. Some technology providers present it as the new frontier for finance providers looking to further optimise their decision making. What better way to introduce efficiencies than to delegate the whole process to a hyper-sophisticated algorithm?

The marketing pitch is seductive, but it bears some investigation. Drill down into the products that such software companies are promoting and you will often discover that they do not really amount to what most would define as artificial intelligence. It is easy to co-opt the ‘AI’ moniker to make something appear more impressive than it actually is.

Perhaps this is just as well, because the technology is not something regulators are too enthusiastic about. In the UK, the US and Australia, they have expressed misgivings about its use in generating lending decisions.
Their fear is that, adopted prematurely, it may not improve decision-making at all, and lenders had better beware of the consequences.

The US’s Consumer Financial Protection Bureau has cautioned lenders and intermediaries that ‘agency’ cannot be attributed to AI systems, given that this risks moving accountability for decision-making away from firms. Companies are not absolved of their legal responsibilities when they let a black-box model make lending decisions, it cautions. The law gives every applicant the right to a specific explanation if their application for credit is denied, and that right is not diminished simply because a company takes credit decisions using a complex algorithm that it does not understand. The bottom line, it says, is that complex algorithms must provide specific and accurate explanations for denying credit applications. Reading between the lines, the implication is that many AI platforms cannot do this and therefore run the risk of future liability claims.

In the UK, this issue was identified in a recent Bank of England/FCA report, which suggested a ‘lack of AI explainability’ posed a potential reputational and regulatory hazard [Bank of England/FCA: Machine Learning in UK Financial Institutions]. The implicit question again posed is: would a company be able to justify its decision when facing a mis-selling claim?
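To see what ‘specific and accurate’ can look like in practice, consider a transparent scorecard. The short pure-Python sketch below is illustrative only (the feature names, weights and cut-off are invented for this example, not taken from any real scorecard): it attaches ranked reasons to every decline, something a black-box model struggles to do.

# Illustrative only: feature names, weights and the cut-off are invented
# for this sketch, not drawn from any real scorecard.
WEIGHTS = {
    "years_at_address": 0.6,
    "debt_to_income": -2.5,
    "missed_payments_12m": -1.8,
    "income_verified": 1.2,
}
CUT_OFF = 0.0  # approve if the weighted score clears this threshold

def decide(applicant):
    """Score an applicant and, on a decline, return the specific reasons."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    if score >= CUT_OFF:
        return "approved", []
    # Rank the features that pulled the score down the most: these become
    # the specific, documentable reasons the decision can be explained with.
    reasons = sorted((c, f) for f, c in contributions.items() if c < 0)
    return "declined", [f for _, f in reasons[:2]]

decision, reasons = decide({
    "years_at_address": 1,
    "debt_to_income": 0.9,
    "missed_payments_12m": 2,
    "income_verified": 0,
})
print(decision, reasons)  # declined ['missed_payments_12m', 'debt_to_income']

The point is not the model’s sophistication but its auditability: every decline comes with the factors that drove it, ready to be put in front of a customer or a regulator.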
It is not that the AI necessarily made the wrong decision (it may well be the right one); it is whether the lender can demonstrate to a client how the decision was arrived at in the first place.

Comprehensive evidencing of the decision is particularly important because AI is already known to be prone to what is referred to as AI bias, AI model risk or, in everyday parlance, the law of unintended consequences. Model bias occurs during the AI training process and can ‘bake in’ certain outcomes. Automated model-selection tools can exacerbate the risk, as can incomplete datasets. For example, the historical gender data gap, which has given us more male-oriented data than female, could well lead to lender affordability decisions skewed by sex. Furthermore, the monitoring of such risk within AI is introduced post-implementation, where ‘hyper-tuned’ models can be highly susceptible to data drift and a lack of meaningful oversight.

These biases would be next to impossible to defend if they were found to underpin poor decision making. Should their existence be established, for example via a mis-selling tribunal, they would leave any lender vulnerable to further actions, since everyone who felt similarly affected would have a right to redress.
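Such oversight need not be exotic, but lenders will be expected to evidence it. As a minimal sketch (pure Python; the decision log, group labels and 0.8 threshold are assumptions made for illustration, not drawn from any regulator’s methodology), the following fragment shows the kind of routine outcome monitoring, here a ‘four-fifths’-style disparity check on approval rates, that post-implementation oversight implies.

# Illustrative only: the decision log, group labels and 0.8 threshold
# are assumptions made for this sketch.

def approval_rate(decisions):
    """Share of decisions in a log that were approvals."""
    return sum(1 for d in decisions if d == "approved") / len(decisions)

def disparity_check(log, threshold=0.8):
    """Flag any group whose approval rate falls below `threshold`
    times the best-treated group's rate (a 'four-fifths'-style test)."""
    rates = {group: approval_rate(ds) for group, ds in log.items()}
    best = max(rates.values())
    flagged = {g: r for g, r in rates.items() if r < threshold * best}
    return rates, flagged

# A toy decision log, split by a monitored attribute.
log = {
    "group_a": ["approved"] * 80 + ["declined"] * 20,
    "group_b": ["approved"] * 55 + ["declined"] * 45,
}
rates, flagged = disparity_check(log)
print(rates)    # {'group_a': 0.8, 'group_b': 0.55}
print(flagged)  # {'group_b': 0.55}: below 0.8 x 0.8 = 0.64, so needs review

A check like this does not explain an individual decision, but it is the kind of documented, repeatable evidence of monitoring that a tribunal would expect to see.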
For these and other reasons, the FCA has intimated that it may not be possible to manage AI within the existing regulatory framework. Fine-tuning what we already have may not be enough, it suggests, so a new approach could be needed. It seems to be making clear that while lenders might like the idea of AI, they should be very careful not to lose the ability to explain to the regulator and the customer precisely why credit was (or was not) granted. Trying to reverse-engineer an AI algorithm’s decision in front of a tribunal will not cut it.

Given the level of uncertainty surrounding the use of AI, we at LendingMetrics certainly think caution should be exercised. It may be wise to wait for more visibility around the level of risk posed by the technology, and for further clarity from the regulator. Lending that is not underpinned by rigorous, documentable decision making has always been unwise. The finance industry has had to learn that lesson the hard way. It is undoubtedly one we should not forget.

David Wylie is Commercial Director of LendingMetrics.

We asked ChatGPT for comment on the use of AI in credit decisioning. Here’s what it had to say...

Pros:
- Increased efficiency: AI can automate many tasks that were previously done manually, resulting in increased efficiency and cost savings.
- Better risk management: AI can help identify and manage risks in financial portfolios, which can reduce the likelihood of losses.
- Improved fraud detection: AI can be used to identify fraudulent transactions and prevent them from occurring, improving security.
- Better customer service: AI-powered chatbots and virtual assistants can provide 24/7 customer service, answering customer queries and resolving issues quickly.
- Enhanced personalization: AI can analyze customer data to offer personalized financial advice and recommendations.

Cons:
- Limited human oversight: AI systems can make errors and produce unintended consequences, which can be difficult to identify and correct without human oversight.
- Bias: AI algorithms can perpetuate biases if they are trained on biased data or if the people designing them are not diverse.
- Ethical concerns: The use of AI in finance raises ethical concerns around privacy, data security and transparency.
- Lack of accountability: It can be difficult to hold an AI system accountable for its decisions and actions, which can create legal and ethical challenges.
- Reduced human interaction: AI may reduce the need for human interaction in finance, which can negatively impact the customer experience and lead to job losses.