so responses can sound very natural. That wide reading does, however, include a lot of extraneous information about pensions in general that you can't get it to ignore. This amalgamated knowledge can seep into its answers.

On testing two AIs, the answers they provided were all very plausible – and that's where the problems begin. Many members of NHSPS have a basic understanding of the scheme but come to us with more complex questions. Like many of the AIs I've tried, the NHSPS-trained bots tended to fabricate convincing but incorrect details, known in the field as hallucinations.

These hallucinations can crop up even in their understanding of scheme fundamentals. A simple question about the maximum lump sum in the most common part of the scheme saw both confidently give a completely inaccurate answer, before going on to add correct and much more complex information about legislative lump sum limits and the lifetime allowance (LTA). Trickier questions of interpretation (of which we get many) sometimes led to completely wrong answers, which seemed to have been pulled from the rules of a different scheme or, more likely, assembled from the millions of other, non-NHS-specific conversations about pensions in the underlying ChatGPT training library.

The effect is very like the problems we come across when well-meaning colleagues have been talking together about their NHS pensions. Misunderstandings become amplified, information that's applicable to one member is taken as applicable to all and misinformation spreads. We could undoubtedly deploy an AI like this on our intranet and many people wouldn't realise they were being given wrong information. This obviously rules it out for use in its current state.

Future promise

So, if the problem of hallucinations could be solved and it was possible to restrict technical answers to the data provided about the specific scheme, would ChatGPT be ready to take over? I'd like to think not. I don't believe I'm being a technophobe or sticking my head in the sand, though. Answering questions, even with complete accuracy, is one thing, but spotting what questions a member should be asking is another altogether.

The AI works without a sufficiently broad context. You can prompt it to ask questions about the member's role, membership history, etc., and it will, cleverly, try to take those things into account. However, it lacks any wider picture. There's a big gap between what it wants to tell you and what it should be telling you. (It's obsessed with the LTA regardless of whether you're a domestic or a consultant.)

Context is vital. An aside about an ex-partner might prompt a pensions expert to check for pension sharing orders or the need to nominate a new partner for survivor benefits. Knowledge of a pending national pay award or legal consultation might take the conversation in a new direction.

It's reasonable to think that AIs will, one day, be able to take anything available on the internet into account in their replies. However, unless they become much more human-like, it's difficult to see members sharing the kind of personal details which allow for deeper insights. A good pension conversation is a two-way, co-operative process, with both parties bringing information and understanding being reached between them. That requires empathy, something beyond the reach of AI just yet.

Advances have been rapid, though, and it would be foolish to assume that this technology could never replace a human expert. It feels like it's close right now, but that's no guarantee it'll get there. That's been the promise of self-driving cars for the last decade: a technology which always seems to be on the brink but never quite makes the last leap to even imperfectly human levels of reliability.

If replacement isn't on the cards, that doesn't mean there isn't a place for an improved version of this technology. If inaccuracy and hallucinations can be squashed, then custom AIs could work as a supplement to human pensions officers. It's a truism that automation has created more jobs than it's replaced. The role of the pension professional in the future might involve the development, training or monitoring of scheme-specific AIs. Perhaps the concept of talking to a person rather than an AI about your pension may make us even more valued.