California Enacts AI and Social Media Laws to Protect Children
California has enacted a sweeping set of artificial intelligence and social media laws aimed at protecting children in online environments. The legislative package, signed by Governor Gavin Newsom, introduces new obligations for app and platform providers that offer services to minors, ranging from social media companies to developers of AI-powered chatbots.
Under the new laws, providers of chatbot platforms are required to detect users who exhibit signs of self-harm or suicidal ideation and to respond appropriately. They must also clearly disclose that each interaction is generated by artificial intelligence and provide break reminders for child users. App stores and operating system providers must implement age-verification mechanisms to limit children’s access to harmful or age-inappropriate content. In addition, social media platforms are required to display warnings about the potential mental health impacts of prolonged use.
The legislation also introduces stronger legal sanctions against deepfake abuse and seeks to prevent companies from evading liability for harms caused by AI systems by claiming that their algorithms act autonomously. With several U.S. states now developing their own AI policies, California’s package signals that children’s well-being is becoming a more visible consideration in the regulation of digital products and services.