
More and more people are also turning to AI for companionship. Research from Common Sense Media found that 72% of teenagers in the US have had an AI companion, and a third reported forming emotional relationships with them. This highlights a very real need: young people need connection and comfort, and they look for it online. But it also raises questions: what happens if a young person turns to a chatbot that has no clinical expertise and no way of stepping in when someone is at risk? OpenAI has admitted that its safeguards don’t always hold up in long conversations and has promised stronger protections in future versions of ChatGPT.

We hear from young people every day who are navigating digital landscapes that are both supportive and unsafe. A recent project we worked on with the charity The Diana Award shared experiences from young people, reminding us just how real online challenges are. For many, cyberbullying, harmful content, and difficult experiences with digital tools are everyday realities. This is why safe, clinically proven digital spaces are so important.

How do we create safe digital spaces? We have been providing digital mental health services in the NHS for over 20 years, and Kooth recently introduced Soluna, our US service, to all 13- to 25-year-olds in California; it has supported over 130,000 young people since its launch in 2024. Across the US and the UK, our services are available to over 20 million people. As early pioneers, we’ve established a blueprint for how digital technology can be harnessed to widen access to evidence-based, high-quality support without barriers such as referral thresholds, waiting lists, or stigma. Our focus has always been on ensuring people can access effective help at the earliest point of need, with safety from harm as a prerequisite, not an afterthought.

How does AI fit into this? In recent years, we’ve been exploring the potential to embed AI within our own services. The stakes in mental health are uniquely high, so our approach has been deliberate, grounded in ethical frameworks and best practice. We believe AI should enhance and augment services for the benefit of both users and staff. That means focusing on the areas where it can genuinely improve safety, efficacy, and wellbeing. Our AI development pipeline therefore focuses on addressing workforce challenges common among mental health practitioners, such as burnout, unconscious bias, and safeguarding fatigue.

Now is the time to shape a technological revolution where safety comes first.
