
Anthropic Publishes Compliance Framework Ahead Of California AI Transparency Law Taking Effect

The framework outlines how the company will comply with California’s Transparency in Frontier Artificial Intelligence Act (SB 53) by assessing and disclosing catastrophic risk management practices.


Anthropic, an artificial intelligence developer, published a compliance framework ahead of the January 1, 2026, implementation of California’s Transparency in Frontier Artificial Intelligence Act, also known as Senate Bill 53. The framework describes how the company will assess and manage catastrophic risks from advanced AI systems and fulfill disclosure requirements under the new law.

SB 53, enacted by the California Legislature and signed by Governor Gavin Newsom in late September 2025, establishes transparency and safety obligations for frontier AI developers. The law applies to companies whose AI systems meet specific “frontier” criteria, requiring them to publish risk assessment frameworks, report critical safety incidents, and include whistleblower protections.

Anthropic’s publicly released “Frontier Compliance Framework” details how the company evaluates and mitigates risks from its most capable models, including cyber offense; chemical, biological, radiological, and nuclear threats; AI sabotage; and loss of control. It also lays out a tiered system for evaluating model capabilities, explains mitigation approaches, and describes protections for model assets as well as incident response procedures.

The document will serve as Anthropic’s compliance framework for SB 53 and other regulatory requirements, while the company’s separate Responsible Scaling Policy remains a voluntary safety policy outlining broader best practices.

“The law balances the need for strong safety practices, incident reporting, and whistleblower protections – while preserving flexibility in how developers implement these safety measures,” Anthropic said in its announcement.

Under SB 53, covered developers must make key safety documents publicly available, report significant safety incidents within 15 days, and protect employees who raise compliance concerns or report severe risks. The statute takes effect January 1, 2026, and represents the first state-level AI safety and transparency requirement in the United States.
