

… ensure tools are safe, inclusive, and effective for everyone.

Protecting the vulnerable

People who are unwell and struggling are also vulnerable. They may not be in a position to assess the trustworthiness of e-mental health tools. That’s why, especially in the mental health and substance use health fields, we must address shortcomings directly. Unfortunately, digital mental health tools have not always been safe. As a result, harm has sometimes been done, and there have been some tragic outcomes. For example, we’ve documented cases of transparency shortcomings, misinformation, privacy violations, bias, and technology lacking a human-centred design. In today’s era of increased loneliness, AI mimics human conversation so precisely that people confuse computer-generated responses with actual compassion. People need to clearly understand when they’re talking to AI, not a person.

We’ve seen incorrectly programmed chatbots produce misinformation, leading people in the wrong direction with their mental health support, sometimes ending in deaths by suicide. Others have become deeply attached to a chatbot, only to realize later that it is not a real person. There have been alarming examples of AI devices giving insulting responses to people from equity-deserving populations seeking mental health support. A simple online search reveals digital mental health products that have sold personal information, gathered from people seeking mental health services while vulnerable, for marketing purposes. We need safeguards to protect people who reach out in their moment of need so they can focus on getting better rather than worrying about exploitation. These concerns are among the many reasons the Commission has partnered with the Canadian Centre on Substance Use and Addiction to develop upcoming AI guidance, building on our national e-mental health strategy.

Standing at the AI crossroads: Working in partnership

This partnership is poised to produce Canada’s first guidance specifically for AI in mental health and substance use health. True to form, we’re gathering diverse perspectives from people with first-hand experience, individuals implementing AI, representatives of diverse communities, and scientific minds. Through our federal mandate, we’re working across the country and gathering insights internationally, and we’re also learning from global partners through networks like eMHIC. The approach blends pragmatism with inclusivity: we recognize that effective guidance must be accessible, actionable, and adaptable across different mental health contexts. Here’s the good news: if you’re reading this, you’re already part of our community working to create responsible and culturally appropriate mental health care. We hope to release early AI guidance this fall and look forward to sharing it across Canada and globally.

Keeping the promise of AI

At the Commission, we see incredible value in responsible AI implementation for mental health. Its positive potential is real, and exciting new products launch daily. With quality programming, AI and other digital mental health products can improve access, personalize care, and help people who haven’t been well served by traditional approaches. But we need to do the next right thing together and ensure digital mental health care is safe. In mental health, one harm is too many when it can be prevented.
