what questions to ask, how to word things, what psychological vernacular to use. The common user does not. I saw an opportunity. I connected with a development team, started the engineering process and spent several thousand dollars to produce a therapist AI bot to my liking. I had the bot read all of my favourite books, listen to all my favourite lectures, and listen to/read the entirety of my public work (hours of podcasts, videos and articles). The bot had the wisdom of my heroes and the tone and presence of my voice. I went through the process of getting legal permission to use the books I trained it on. I even gave it upgraded security to ensure client confidentiality. My lawyer had drafted the disclosure forms, but then… I paused.

I sat at my computer beholding something like a digital therapeutic Frankenstein’s monster and felt a hesitancy to pull the lever that would bring it to life. Sending my creation out into the world didn’t feel right. Like Frankenstein’s monster, it was composed of many parts. It resembled me in some ways but not in others. It was taller and stronger than I am, it had a bigger brain, it didn’t need to sleep or rest – but it wasn’t human.

Oppenheimer’s switch

It may be beyond many of you why I’d even pursue such an endeavour in the first place – the ethical nightmares alone would prevent most people from even considering such a thing. But I knew I wasn’t the only person with this idea. Hundreds of these bots have already hit the market. Currently, you can find AI chatbots that will respond with interventions in CBT, ACT or DBT – for free. It’s my prediction that many prominent figures in our field will license their likeness and image to companies that create personalised AI bots and avatars. This has already happened in other industries – you can pay to chat nonstop with AI mock-ups of influencers like MrBeast (the largest YouTuber on the planet) or talk with celebrities like Kendall Jenner. The marketing from some of these products invites young customers to ‘share [their] secrets’ and ‘tell [them] anything!’.

Snapchat has already made ‘AI friends’ a built-in feature on its platform, inviting users (mostly children) to form digital friendships. Services like these are obviously not capable of offering efficacious care to people who are genuinely in need of treatment, but with such demand for services like this it seems likely that the mental health field will respond in some way.

So why did I stop the launch of my therapy bot? I felt that I was standing at Oppenheimer’s switch – Oppenheimer being the man who led the Manhattan Project that assembled the atomic bomb. The ethical tension for him was multifaceted: if the project was successful it could end the world war and secure world peace, but setting off an atomic bomb could also set the atmosphere of the earth on fire and destroy all life on earth. Why risk it? Because of a unique external pressure – the Nazis were quickly building a bomb of their own. The question wasn’t if but when a bomb would go off, and who would be on the receiving end.

It might seem dramatic to compare my therapy AI bot to an atomic bomb, but there certainly is the potential for real harm with this technology. As I’ve talked to colleagues about this, most bring up concerns about the bot leading people down the wrong therapeutic path. What if someone was suicidal, or a danger to others? Can AI be trusted to navigate those circumstances? Honestly, that’s not where my concern lies – I believe AI chatbots will soon be the go-to solution for suicide hotlines and domestic violence calls. I believe this because I spent time watching engineers mould this technology, and I’ve seen what’s possible. It will feel human enough. In fact, the technology is advancing so quickly that when we get the data back my prediction is we’ll see bots are more effective at de-escalating suicidal ideation than humans. I didn’t pause the building of my version out of fear that AI therapy would ultimately fail at providing helpful care. I paused because I’m worried about the consequences of its success.

The trade

Every technological change is a trade – one way of life for another (hopefully a better one). The problem is that we often can’t fully see what the final trade will truly cost us until it’s too late. For example, before Thomas Edison invented the phonograph, songs would be sung at most communal gatherings. Specific songs were passed down from generation to generation, encapsulating communal values, mythology and history. When I put on my ‘house music’ Spotify playlist during dinner with friends I wonder if something valuable was lost in the phonographic trade. Sure, the playlist sets a nice atmosphere, but if it weren’t so socially strange I’d much rather that my friends and I spontaneously burst into song on a regular basis. Could Edison have predicted that his invention would one day reduce communal singing to religious gatherings, choirs and karaoke bars? I’m not saying the phonographic trade wasn’t worth it – I enjoy listening to music. But it’s worth noticing what media ecologist Neil Postman puts so well: ‘If you remove the caterpillars from a given habitat, you are not left with the same environment minus caterpillars – you have a new environment. The same is true if you add caterpillars to an environment that has had none. This is how the ecology of media works as well. A new technology does not add or subtract something. It changes everything. In the year 1500, 50 years after the printing press was invented, we did not have old Europe plus the printing press. We had a different Europe. After television, the United States was not America plus television; television gave a new colouration to every political campaign, to every home, to every school, to every church, to every industry. Therefore, when
20 THERAPY TODAY MAY 2024