
OpenAI Disputes Claim It Violated California AI Safety Law

OpenAI rejects allegations that its latest model release failed to comply with a state law requiring AI safety safeguards.


OpenAI has disputed claims, made in posts on X by the technology watchdog group The Midas Project, that the company violated California’s Transparency in Frontier Artificial Intelligence Act (SB 53) when it released its latest coding model.

The Midas Project posted on X that OpenAI “appears to have violated” the law by releasing its GPT-5.3-Codex model without implementing safeguards that the group says were required under OpenAI’s own safety framework and thus under SB 53’s compliance mandate.

SB 53 requires “large frontier developers” of advanced AI models to publish a safety framework explaining how they address risks, including catastrophic outcomes that result in significant injury or damage, and to follow that framework when deploying models. The law also prohibits misleading statements about compliance and authorizes the California attorney general to seek civil penalties for violations.

According to the watchdog’s X post, OpenAI classified GPT-5.3-Codex as a “high risk” model under its internal Preparedness Framework but did not apply special safeguards before making it publicly available. The group said the omission appears inconsistent with the company’s description of required safeguards for models with elevated cybersecurity risk.

OpenAI disputed the allegation in the safety report accompanying the model release and in a statement to Fortune, saying GPT-5.3-Codex “completed our full testing and governance process, as detailed in the publicly released system card, and did not demonstrate long-range autonomy capabilities based on proxy evaluations and confirmed by internal expert judgments, including from our Safety Advisory Group,” and that extra safeguards were therefore not triggered.

In the safety report, OpenAI also said the language in its safety framework describing when additional safeguards are required is “ambiguous” and that its explanation in the released report reflects the company’s intent.

Tyler Johnston, founder of The Midas Project, said in a separate post that the potential violation was “especially embarrassing given how low the floor SB 53 sets is: basically just adopt a voluntary safety plan of your choice and communicate honestly about it, changing it as needed, but not violating or lying about it.”

Nathan Calvin, vice president of state affairs and general counsel at advocacy group Encode, echoed criticisms of OpenAI’s defense in an online post, writing, “Rather than admit they didn’t follow their plan or update it before the release, it looks like OpenAI is saying that the criteria were ambiguous. From reading the relevant docs … it doesn’t look ambiguous to me.”

OpenAI’s legal compliance status under SB 53 remains unsettled; the California attorney general’s office did not comment on whether an investigation has been opened, but did say it is “committed to enforcing the laws of our state, including those enacted to increase transparency and safety in the emerging AI space.”

The dispute marks one of the first public tests of California’s frontier AI law, which aims to ensure that developers of large advanced models adopt safety practices that align with their public disclosures and implement them before release.
