The big issue
an old technology is assaulted by a new one, institutions are threatened. When institutions are threatened, a culture finds itself in crisis. This is serious business.' It's not just that we make a one-for-one trade when a technological innovation occurs; the environment as a whole changes. The ability to listen to recorded music didn't just change how we listen to music but how we relate to music as a whole (and perhaps to each other). If such a powerful transformation occurred simply by recording music, what kind of transformation can we expect from the growing presence of AI in our lives?

We can already see glimpses, since AI has been operating under our noses for well over a decade under a different, innocuous name – algorithms. At the 2023 Psychotherapy Networker Summit, Tristan Harris and Aza Raskin, co-founders of the Center for Humane Technology and creators of the documentary The Social Dilemma, referred to algorithm-based social media platforms as the 'first contact' our culture had with AI. They surmised that while we readily embraced the benefits of social media algorithms, we also opened the door to unpredictable and unpleasant things like social media addiction and doomscrolling, influencer culture, QAnon, shortened attention spans, heightened political polarisation, troll farms and fake news. Of course, as Harris and Raskin point out, social media companies weren't trying to ruin people's lives – they had well-intentioned goals like giving everyone a voice, connecting people with old and new friends, joining like-minded communities and enabling small businesses to reach new customers. Companies like OpenAI and Google have similarly positive intentions for this new wave of AI technologies. Harris and Raskin explained that AI will boost our writing and coding efficiency, open the door to scientific and medical discoveries, help us combat climate change and, of course, make us lots and lots of money. But what can we anticipate this trade costing us? Harris and Raskin offer a range of possibilities, some of which are already present – reality collapse, trust collapse, automated loopholes in law, automated fake religions, exponential blackmail and scams, automated cyberweapons and exploitation code, biology automation, counterfeit relationships and AlphaPersuade.

Being too good

AlphaPersuade is particularly concerning to me. If that's not a familiar term to you, I think I can best explain it this way. Let's say I make two commercials. Both are identical, except one has slow, emotional music behind it and the other has music with a more uplifting tone. I send these versions to two different groups of a few hundred people and see which one produces the most sales. If the slow, emotional song garnered 20% more sales, then I know it's more profitable. I can then broadcast that ad to thousands of people and make 20% more than if I had used the other ad. That's an example of simple A/B testing in marketing. Now, what if you were able to do that with persuasive arguments? In a way, we already do this by testing psychological interventions in a controlled setting, but what if the available tools were far more granular? What if the AI could see which arguments worked on which demographics – some people respond to shame-based arguments, some to appeals to empathy, some to fearmongering and some to evidence and hard facts? An advanced AI would know not only which arguments are most compelling to whom, but which phrases to use at which point in the argument for the highest statistical chance of persuading the user. This is the concern with AlphaPersuade – a bot so effective at persuading users that it could function as a weapon of mass cultural destruction.
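To make the mechanics concrete, here is a minimal sketch of the A/B test described above, written in Python; the conversion rates, audience size and names are invented for illustration, not taken from any real campaign:

```python
# A toy A/B test: two identical ads that differ only in soundtrack.
# The 'true' conversion rates below are hypothetical.
import random

random.seed(42)  # make the simulated test repeatable

def run_variant(conversion_rate: float, audience_size: int) -> int:
    """Show one ad variant to an audience and count the resulting sales."""
    return sum(random.random() < conversion_rate for _ in range(audience_size))

AUDIENCE = 300  # 'a few hundred people' per test group, as in the example

sales_slow = run_variant(0.12, AUDIENCE)       # slow, emotional music
sales_uplifting = run_variant(0.10, AUDIENCE)  # uplifting music

print(f"Slow/emotional ad: {sales_slow} sales")
print(f"Uplifting ad:      {sales_uplifting} sales")

# Whichever variant wins the small test is then broadcast at scale.
winner = "slow/emotional" if sales_slow >= sales_uplifting else "uplifting"
print(f"Broadcast the {winner} version to thousands of people.")
```

The AlphaPersuade scenario is this same pick-the-winner loop run continuously – over arguments and phrasings rather than soundtracks, and segmented by demographic.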
You can already see examples of how this kind of technology has been problematic in the wrong hands. A report in MIT Technology Review revealed that, in 2019, 19 of Facebook's top 20 pages for US Christians were run by Eastern European troll farms.1 On top of that, troll farms were behind the largest African American page on Facebook (reaching 30 million US users monthly) and the second-largest Native American page on Facebook (reaching 400,000 monthly users). It's suspected that these groups, mainly based in Kosovo and Macedonia, were targeting Americans with the intent of stirring up conflict and dissent around the 2020 US presidential election. Their success in accumulating and manipulating over 75 million users is, in no small part,
thanks to this 'first contact' with AI.

While you might worry about the consequences of an AI therapist handling an ethically ambiguous situation poorly, have you stopped to consider the dangers of it being too good? What kind of power is handed to the individual or corporation that holds data from thousands of personal counselling sessions? What's to stop them from building a powerful AlphaPersuade model capable of statistically anticipating and manoeuvring conversation to dismantle 'cognitive distortions' or 'maladaptive thinking'? What if it could be used to bend the mental health of vulnerable people in the direction of certain beliefs or agendas? If you could convince the masses of anything, would you trust yourself to hold such power? I certainly would not.

Dark magic

I'm aware of how extreme and hyperbolic these concerns may seem – and I hope I'm simply making too much of a small thing – but Oppenheimer had hoped his concerns were inflated as well. After all, according to the calculations, the likelihood that the atmosphere would ignite was infinitesimal (but not zero). Like Oppenheimer, I felt external pressure to produce something people were already in the process of making. Oppenheimer's choice has not led to the end of the world yet, but will it? I certainly hope not. AI hasn't led to a detrimental ecological shift in psychotherapy, nor in the psychology of mankind as a whole. Perhaps the trade will be worth it. If AI therapy bots will give thousands (perhaps