A First in the U.S.: California’s AI Safety Law
In the absence of comprehensive AI legislation at the federal level, California's Frontier AI Transparency Act (SB 53) introduced the first binding state-level regulation of advanced artificial intelligence models in the United States. The law applies only to "frontier" models trained with extremely large amounts of computational power and to major developers with annual revenues exceeding $500 million. Its aim is to ensure transparency and public oversight without stifling innovation.
Under SB 53, covered companies must establish and publish safety and risk-management frameworks, issue public transparency reports, report serious safety incidents to state authorities, and provide whistleblower protections. Catastrophic risk is defined through scenarios that could result in large-scale loss of life or billions of dollars in damages. Companies that fail to comply may face civil penalties of up to $1 million.
Although the law is narrower in scope than the EU Artificial Intelligence Act, it takes a more demanding approach to public transparency. Experts argue that SB 53 positions California as a testbed for AI governance, while also setting a potential precedent for other states and for future federal regulation.