

from different perspectives often provides new insights and supports self-reflection. For 35% of the participants, talking to generative AI reduces loneliness. Beyond emotional support, people use AI to prepare sessions, weigh treatment decisions, and find suitable care.

The shadow side

While there is clearly potential, there is also risk. Around four in ten users report moments when AI felt unhelpful or unsettling. Responses can be too generic or overly agreeable, echoing rather than challenging unrealistic thoughts. “AI follows your story, lets you hear what you want to hear,” says one of the participants. “You have to specifically ask it to hold up a mirror, reflect, and ask what you can do to improve.” Nuance is sometimes missed, and replies can feel impersonal or repetitive.

Privacy concerns are widespread. Many hesitate to share sensitive information due to uncertainty about how data is stored and used. People also doubt accuracy and reliability. As one participant put it: “AI can be a support, as long as everyone realizes it isn’t human and won’t automatically give the right answers.” A few describe conversations that deepened dark thoughts or intensified loneliness. Both users and non-users are generally aware of the risks of using generative AI for mental health issues, an awareness reinforced by recent media attention.

Supplementary, not a substitute

Non-users tend to see AI as impersonal, missing the empathy, nonverbal cues, and relational warmth that human contact provides. A participant says: “Therapy should be about reconnecting with yourself, who you are, and what you are worth. This can only happen through contact with another person who walks the steps with you. Certainly not through a computer program.” Non-users often describe AI as distant or artificial, lacking the empathy, body language, and relational warmth that human connection brings. Some fear that normalizing AI might justify further cuts in human services. At the same time, many participants acknowledge AI’s potential as a temporary bridge while waiting for professional help and as a supplement to formal care. In a system under immense pressure, that accessibility can make a meaningful difference.

A call for guardrails

The results confirm that generative AI is widely used as supplementary support for mental health. Although the majority of users indicate they would rather discuss mental health concerns with their loved ones or with professional services, we conclude that generative AI meets a need for continuity and a sense of closeness that formal care – constrained by waiting lists and limited contact time – often cannot provide, and that people cannot or do not wish to ask of those close to them. Even though AI is not, and must never be, a replacement for human contact, it can offer a steady digital presence when no one else is there.

To fully harness the potential of generative AI while limiting its risks, MIND argues for:
1 Certification standards and ethical and practical guidelines
2 Assessment of applications to evaluate safety, quality, and ethics
3 Strict requirements for privacy and data security
4 Continuous improvement of accuracy and safety
5 Development of safe alternative AI tools for mental health
6 User education to explain the possibilities and limitations of AI and to offer safe guidance on use
7 Ongoing ethical reflection and transparency about AI’s non-human nature
8 Additional research to compare applications, monitor short- and long-term effects, and define conditions for safe use

MIND calls on developers, policymakers, and care professionals to include lived experience as an equal voice in all discussions and developments on generative AI for mental health.
