
New California Laws 2026

SB 243: New safety rules for AI companion chatbots

By David Lisson, David I. Feinstein and Thomas Floyd

A focus of the rapidly evolving artificial intelligence (AI) regulatory landscape is AI “companion” chatbots—systems designed to mimic human behavior and interact with users across multiple sessions. Senate Bill 243, signed by Gov. Newsom on Oct. 13 and effective Jan. 1, 2026, will require companies that offer companion chatbot systems to California users (operators) to implement new safety guardrails, including notice requirements and protocols for responding to users in crisis.

SB 243 defines “companion chatbots” as AI systems with natural language interfaces that provide humanlike responses and are “capable of meeting a user’s social needs,” including by sustaining relationships across multiple interactions. The law exempts several common AI chatbot applications, including those used only for certain tasks such as customer service, internal productivity purposes, research or technical assistance. Also carved out are chatbots within video games that can discuss only game-related topics—aside from certain high-risk topics related to health and sexuality—and voice-activated virtual assistants on consumer devices that do not sustain relationships across interactions and are not likely to elicit emotional responses from users.

There are two separate user notification requirements for covered companion chatbots. First, they must provide clear and conspicuous notice that the companion chatbot is not human if a reasonable person would be misled to believe otherwise. Second, for all users the operator knows are minors, the system must disclose that the user is interacting with AI, provide recurring notifications every three hours, and institute measures to prevent the companion chatbot from producing sexually explicit materials or encouraging the minor to engage in such conduct.
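For operators building compliance tooling, the three-hour cadence for known minors reduces to simple timestamp bookkeeping. The Python sketch below is a minimal illustration only; the statute prescribes no implementation, and every name in it (SessionNotices, notices_due, REMINDER_INTERVAL) is a hypothetical placeholder rather than anything drawn from SB 243 or a real API.

from datetime import datetime, timedelta
from typing import Optional

# Hypothetical sketch; SB 243 does not prescribe any implementation.
REMINDER_INTERVAL = timedelta(hours=3)  # recurring disclosure cadence for known minors

class SessionNotices:
    """Tracks when a companion chatbot session owes the user an AI disclosure."""

    def __init__(self, user_is_known_minor: bool) -> None:
        self.user_is_known_minor = user_is_known_minor
        self.last_disclosure: Optional[datetime] = None

    def notices_due(self, now: datetime) -> list[str]:
        """Return any disclosures to surface before the next chatbot response."""
        due: list[str] = []
        if self.last_disclosure is None:
            # Initial clear-and-conspicuous notice that the chatbot is not human.
            due.append("Notice: you are chatting with an AI, not a human.")
            self.last_disclosure = now
        elif self.user_is_known_minor and now - self.last_disclosure >= REMINDER_INTERVAL:
            # For users the operator knows are minors, repeat the AI
            # disclosure at least every three hours of interaction.
            due.append("Reminder: you are interacting with an AI.")
            self.last_disclosure = now
        return due

# Example: a known minor's session gets the initial notice immediately
# and a recurring reminder once three hours have elapsed.
session = SessionNotices(user_is_known_minor=True)
start = datetime(2026, 1, 1, 9, 0)
print(session.notices_due(start))                       # initial notice
print(session.notices_due(start + timedelta(hours=2)))  # [] (nothing due yet)
print(session.notices_due(start + timedelta(hours=3)))  # three-hour reminder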

Operators of companion chatbots must also employ protocols for responding to expressions of suicidal ideation or self-harm by users that include, but are not limited to, referring users to crisis service providers. Operators must screen for suicidal ideation using evidence-based methods, although SB 243 offers no guidance on suitable methods.

SB 243 also establishes a new reporting regime for operators of companion chatbots. Starting July 1, 2027, operators must submit annual reports to the Office of Suicide Prevention detailing their protocols for responding to suicidal ideation by users and for preventing the chatbot from engaging with users about suicidal ideation, as well as the number of times the operator referred users to a crisis service provider in the preceding calendar year. Additionally, operators must publish data from these reports, as well as details about their chatbots’ protocols, on their websites.

Although SB 243 does not provide for civil penalties, it does create a private right of action: any person harmed by a violation may recover the greater of actual damages or $1,000 per violation, along with injunctive relief and reasonable attorney’s fees and costs.

Given the ubiquity of AI-powered chatbots, companies should assess the applicability of SB 243 to their tools to determine whether they constitute “companion chatbots” or fall within the exempted use cases. Should the law apply, companies will want to document the evidence on which their protocols rely to assess user expressions of self-harm and the standards they employ to evaluate explicit or similarly covered content, and they will need to monitor the required reporting metrics on a regular basis as part of their oversight and governance functions.

David Lisson is a partner, David I. Feinstein is counsel, and Thomas Floyd is an associate at Davis Polk & Wardwell LLP.

