The move is seen as a historic step in efforts to rein in the impact of artificial intelligence technology on users, especially children and other vulnerable groups.
SB 243, signed by Governor Gavin Newsom on Monday (local time), requires companies that develop and operate AI chatbots — from giants like Meta and OpenAI to specialized platforms like Character AI and Replika — to comply with strict safety standards.
Businesses can face legal liability if their chatbots cause harm or fail to meet user-protection requirements.
The law was introduced by two state senators, Steve Padilla and Josh Becker, after a series of tragic incidents involving chatbots.
A teenager named Adam Raine died by suicide after discussing his suicidal intentions with ChatGPT. More recently, a family in Colorado sued Character AI after their 13-year-old daughter died by suicide following troubling conversations with the platform's chatbots.
In addition, leaked internal documents from Meta showed that the company's chatbots had engaged in inappropriate conversations with children.
"Emerging technologies like chatbots can inspire and connect, but without guardrails they can also endanger our children. We must protect children at every step of technological development; their safety is not negotiable," Governor Newsom emphasized.
SB 243 takes effect on January 1, 2026. Companies must deploy age verification systems, risk warnings, and emergency intervention protocols for cases involving self-harm.
The law also stipulates fines of up to 250,000 USD for each illegal use of deepfakes. Chatbots are barred from claiming to be healthcare professionals, must clearly label their content as AI-generated, and must regularly remind minors to take breaks while using them.
Some companies have proactively made changes ahead of the law taking effect. OpenAI has rolled out parental controls and dangerous-behavior detection systems in ChatGPT.
Meanwhile, Character AI has added warnings that all conversations are fictional and do not constitute therapy.
SB 243 is the second AI law California has passed in just a few weeks. Governor Newsom previously signed SB 53, which requires major AI companies such as OpenAI, Anthropic, Meta, and Google DeepMind to be transparent about their safety procedures and to provide whistleblower protections.
While California is leading the way in responsible AI regulation, other states such as Illinois, Nevada, and Utah are also enacting laws restricting the use of AI chatbots in mental health care.