The proposed legislation would shield large AI developers from liability for catastrophic harm.
OpenAI has endorsed Illinois Senate Bill 3444, known as the Artificial Intelligence Safety Act, a proposed law that would limit when AI developers can be held liable for large-scale harms.
The bill, sponsored by state Senator Bill Cunningham (D-Springfield), establishes liability protections for developers of “frontier” AI models. These are defined as systems trained using more than $100 million in computational resources or exceeding 10²⁶ computational operations.
The protections would apply only in cases of “critical harms,” which the legislation defines as large-scale incidents: those resulting in the death or injury of at least 100 people, damages exceeding $1 billion, or harms arising from the use of AI to develop or enable chemical, biological, radiological, or nuclear (CBRN) weapons, along with similar high-risk scenarios.
Under the proposal, a developer is not liable for such harms if it did not act intentionally or recklessly and if it published a safety and security protocol and a transparency report detailing how risks are assessed, mitigated, and addressed.
The bill also provides an alternative compliance path. A developer that produced a safety and security protocol “in a manner substantially similar to this Act” would be deemed compliant if it either “agrees to be bound by the safety and security requirements adopted under Article 56 of the European Union’s Artificial Intelligence Act” or “enters into an agreement with an agency of the federal government.” Such a federal agreement would need to provide access to frontier models for research and evaluation, facilitate assessments of risks such as cyber and biological threats, and allow the government to release information related to those evaluations.
Niedermeyer warned against “a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety,” calling for alignment with a federal framework. Separately, OpenAI spokesperson Jamie Radice said, “We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses.” [Source: Wired]
OpenAI has previously opposed legislative efforts to expand liability for AI developers, generally taking a defensive stance against stricter accountability measures.