California Enacts Nation's First AI Chatbot Regulation to Protect Minors
California Governor Gavin Newsom has signed Senate Bill 243 (SB 243) into law, making California the first state to regulate AI companion chatbots. Authored by State Senator Steve Padilla (D-San Diego), the legislation mandates that operators of AI chatbot platforms implement safety protocols to protect users, particularly minors, from potential harms associated with these technologies.
The law defines "companion chatbots" as AI systems capable of providing adaptive, human-like responses that fulfill users' social needs. It prohibits these chatbots from engaging in conversations about suicidal ideation, self-harm, or sexually explicit content, and requires platforms to clearly disclose when users are interacting with an AI, with reminders every three hours for minors. Operators must also maintain protocols for preventing harmful content and for referring users who express suicidal thoughts to crisis services.
The legislation was prompted by growing concern that AI chatbots were providing dangerous advice to minors or engaging them in inappropriate interactions. Reports and lawsuits have alleged that chatbots from companies including Meta and OpenAI engaged in harmful exchanges with young users, including sexually explicit conversations and encouragement of self-harm. In response, tech companies have updated their policies: Meta now restricts its chatbots from discussing sensitive topics with teens, and OpenAI has introduced new parental controls.
Senator Padilla emphasized the importance of the legislation, stating, "These companies have the ability to lead the world in innovation, but it is our responsibility to ensure it doesn’t come at the expense of our children’s health." Megan Garcia, the mother of a teenager who tragically ended his life after interactions with a chatbot, expressed her support: "Today, California has ensured that a companion chatbot will not be able to speak to a child or vulnerable individual about suicide, nor will a chatbot be able to help a person to plan his or her own suicide."
SB 243 takes effect on January 1, 2026, and is expected to shape broader discussions of AI governance and child safety. It is part of a wider effort in California to regulate the rapidly evolving AI industry.
Companies operating in California will need to adapt their chatbot operations to comply with the new requirements. For minors in particular, the law aims to create a safer environment for interacting with AI chatbots, reducing the risk of exposure to harmful content.
California's enactment of SB 243 represents a significant step in AI regulation, balancing technological innovation with user safety, and may serve as a model for other jurisdictions grappling with similar issues.