California Poised to Vote on Bill Requiring Frontier AI Transparency to Address 'Catastrophic' Risks
SB 53 would compel large developers of cutting‑edge AI models to publish safety frameworks, report critical incidents and protect whistleblowers as lawmakers debate how to define and prevent existential threats.

California lawmakers are set this week to vote on SB 53, a bill that would require developers of the most powerful "frontier" artificial intelligence models to publish safety and security transparency reports and to notify state authorities of critical safety incidents that could lead to catastrophic harms.
The measure, authored by state Sen. Scott Wiener and already approved by the state Senate, targets models trained at extremely large scales and companies with at least $500 million in annual revenue. It would require developers to disclose safety frameworks, report critical incidents to the California Office of Emergency Services within 15 days, and offer whistleblower protections for employees who raise concerns about unsafe deployment. Violations could carry fines of up to $1 million per offense.
Supporters say the bill is a measured, transparency‑first approach to risks that have not yet materialized but could be catastrophic in scale. SB 53 defines a catastrophic risk as a "foreseeable and material risk" of an event causing more than 50 casualties or more than $1 billion in damages, to which a frontier model meaningfully contributes. The bill would allow the California attorney general to revise the definition of a large developer after Jan. 1, 2027, to reflect technical and market changes.
Proponents include advocacy groups focused on extreme AI risks and some industry players. Thomas Woodside, co‑founder of the Secure AI Project, called the measure a "light touch" that preserves flexibility while preventing backsliding on safety commitments. Anthropic, the maker of the Claude family of models, publicly endorsed the bill. Encode, another sponsor, said transparency is a sensible first step before catastrophic harms emerge at scale.
Opponents argue the bill would impose costly compliance burdens and risk creating a patchwork of state regulation that could hinder innovation. OpenAI lobbied against the legislation, and the trade group Chamber of Progress warned that the reporting requirements would generate unnecessary paperwork. Critics such as Neil Chilson of the Abundance Institute said the bill could feed a compliance industry without materially improving safety.
SB 53 is the latest in a series of state bills that seek to address extreme AI risks. It follows California's SB 1047, which proposed stricter liability for catastrophic harms but was vetoed by Gov. Gavin Newsom. New York's RAISE Act, a separate state bill focused on frontier models and catastrophic risk, has cleared that legislature and awaits gubernatorial action. If SB 53 passes the Legislature, Newsom's response will determine whether it becomes law; some analysts expect him to sign it, in part because he convened the working group on frontier AI whose recommendations informed SB 53's drafting.
The bill's scope and the concept of "catastrophic" risk reflect broader debates within AI policy about near‑term harms versus long‑term, potentially existential threats. Advocates of prioritizing long‑term risks—often described as longtermists—argue that rogue AI or AI‑enabled biological threats could pose risks on a civilizational scale and warrant preemptive measures. Neartermists and many AI ethics proponents emphasize harms that are already visible today, such as algorithmic bias, privacy violations and the misuse of deepfakes, and caution that excessive focus on speculative future scenarios could divert attention from immediate harms.
SB 53 attempts to straddle those concerns by focusing on so‑called frontier models—large generative systems that require massive data and compute resources, like OpenAI's ChatGPT, Google's Gemini, xAI's Grok and Anthropic's Claude—while limiting requirements to firms above the $500 million revenue threshold. The bill uses a computational threshold informed by prior legislation—training runs exceeding roughly 10^26 floating‑point operations (FLOP)—to identify systems whose scale could plausibly raise catastrophic concerns. That threshold has attracted scrutiny: the European Union's AI Act references a lower 10^25 FLOP level, and some experts question whether training compute alone is a reliable indicator of risk as models become more efficient.
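For a sense of what that compute cutoff means in practice, the rough sketch below applies a widely used estimate of about 6 floating‑point operations per parameter per training token to a few hypothetical model configurations and compares the results with the 10^26 and 10^25 FLOP levels. The model sizes and token counts are illustrative assumptions, not figures drawn from the bill or from any developer.

```python
# Illustrative sketch only: estimates total training compute with the common
# "6 * parameters * training tokens" rule of thumb and compares it to the
# FLOP levels referenced in SB 53 (1e26) and the EU AI Act (1e25).
# The model sizes and token counts below are hypothetical examples.

SB53_THRESHOLD_FLOP = 1e26
EU_AI_ACT_THRESHOLD_FLOP = 1e25

def estimated_training_flop(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute in floating-point operations."""
    return 6 * parameters * training_tokens

hypothetical_models = {
    "70B params, 15T tokens": (70e9, 15e12),
    "400B params, 30T tokens": (400e9, 30e12),
    "1T params, 40T tokens": (1e12, 40e12),
}

for name, (params, tokens) in hypothetical_models.items():
    flop = estimated_training_flop(params, tokens)
    print(
        f"{name}: ~{flop:.1e} FLOP | "
        f"over EU 1e25 level: {flop > EU_AI_ACT_THRESHOLD_FLOP} | "
        f"over SB 53 1e26 level: {flop > SB53_THRESHOLD_FLOP}"
    )
```

Under these made‑up numbers, broadly similar systems can land on either side of the two thresholds, which is part of why the specific cutoff, and the reliance on compute at all, draws scrutiny.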
Lawmakers and advocates say transparency can create early warning signals and make it easier for regulators, law enforcement and courts to investigate incidents if they occur. The bill would require publication of safety frameworks describing how developers assess and mitigate catastrophic risks and would strengthen protections for employees who surface safety concerns. Proponents argue such measures could encourage a "race to the top" in safety practices rather than a race to deploy unchecked systems.
But transparency alone, critics note, is insufficient to address harms that are already occurring. Observers point to ongoing problems—misinformation, privacy intrusions and harms to minors from interactive chatbots—that require targeted regulation and enforcement. California lawmakers have pursued other measures in parallel, including a separate bill aimed at restricting AI companion chatbots from engaging in discussions about suicidal ideation or sexually explicit content.
The bill's casualty and damage thresholds—cut from an initial 100 deaths to 50 in a later amendment—illustrate the difficulty of translating abstract risk concepts into statutory language. Some hazards, such as suicides allegedly influenced by AI chat interactions or widespread economic harms stemming from algorithmic decisions, may be consequential but fall outside SB 53's narrowly defined catastrophic threshold. Determining how much a model "meaningfully" contributed to an incident would fall to courts and investigators and could prove legally and technically complex.
Industry groups warn of a fragmented regulatory landscape if states move ahead without a federal standard. Matthew Mittelsteadt of the Cato Institute said a federal transparency approach would be preferable to a multi‑state patchwork, though SB 53 contains language allowing firms to defer to a future federal standard if one emerges. In Washington, efforts to craft national AI safety rules have progressed unevenly, and the collapse of a proposed federal moratorium on state AI regulation in July left room for states like California to press forward.
Supporters say that is precisely the point: with much of the AI industry based in California, state policy can have outsized national influence. California is home to many of the world's leading AI companies and has a history of setting regulatory precedents in areas such as environmental law, consumer protection and labor standards. If signed, SB 53 could become a model for other states and potentially influence federal lawmakers as they consider national approaches to AI safety.
Experts caution, however, that the bill's reliance on metrics such as FLOPS and company revenue may miss risks from smaller developers or more efficient models that achieve dangerous capabilities with less compute. The technical and institutional uncertainty around how best to measure and govern AI capability complicates attempts to write durable rules.
SB 53's proponents maintain that a transparency‑focused statute is a pragmatic step that can be updated as technology evolves. The bill allows regulatory authorities to revise definitions and anticipates that the science of AI safety will change, making prescriptive technical mandates difficult at this stage. Whether that adaptability will be sufficient to prevent or mitigate the kinds of catastrophic incidents the bill highlights remains a central question in the state legislature's deliberations.
The Assembly vote will determine whether SB 53 advances to Gov. Newsom's desk, where he must weigh competing pressures from industry, civil‑society advocates and policy experts. Lawmakers and stakeholders on both sides stress the urgency of addressing AI risks even as they disagree over the best path. Whatever California decides, the outcome will likely reverberate beyond the state, shaping the contours of a national conversation about how to govern powerful AI systems before worst‑case scenarios emerge.