New California Laws 2026
SB 53: Balancing AI innovation with public safety
By Allison Melendez

SB 53, the Transparency in Frontier Artificial Intelligence Act, was signed into law on Sept. 29 by Gov. Gavin Newsom. SB 53 marks the first time a U.S. state has passed a law specifically regulating the development of AI. California is known as the birthplace of tech, and the AI industry continues to thrive in the state. The law establishes a set of requirements, protections and structures aimed at balancing AI innovation with public safety: it seeks to preserve and boost innovation while guarding against risks of misuse, and it aims to build trust and accountability around “frontier” AI development. Because California is home to many of the world’s largest AI companies and a large portion of AI-related talent and investment, SB 53 could set a precedent and provide a framework for other states and for federal policy, shaping how AI is governed nationwide.

SB 53 applies only to “frontier” models, meaning those developed by large AI developers meeting certain thresholds. It is meant to create mandatory, standardized and objective reporting by frontier developers so that the government and the public receive timely and accurate information. The law focuses on catastrophic risks such as misuse of bioweapons, loss of control, large-scale cybersecurity threats or other existential-scale harms. It does not directly address issues such as misinformation, bias or privacy abuse.

The framework is meant to document the technical and organizational protocols used to manage, assess and mitigate catastrophic risks. Again, this applies to large developers and anyone else who has trained or initiated the training of a frontier model. A frontier model is defined as a model trained using a quantity of computing power greater than 10^26 integer or floating-point operations. Some of the provisions apply to all frontier developers, but the bulk apply solely to large frontier developers, defined as developers that, collectively with their affiliates, had annual gross revenue in excess of $500 million.

Large frontier developers will be required to write, implement, comply with and clearly publish on their websites an “AI framework” that describes their approach to topics including, but not limited to, catastrophic risks, mitigation and internal governance. They must update the framework at least once per year and are also required to submit a summary of any assessment of catastrophic risk to the Office of Emergency Services. Developers who do not comply may face civil penalties enforceable by the state attorney general.

While SB 53 aims to create standards that increase public trust, it falls short in that it focuses primarily on very large AI developers. It is certainly one step forward in building safeguards as the use of AI grows. However, there is a potential concern that state-by-state regulation could lead to a patchwork of laws, which could make compliance more complex for AI companies operating in multiple jurisdictions. SB 53 is a starting point, but more clarity will be needed as to what these frameworks require.
Allison Melendez is an attorney at KENT | PINCIN.