The lawsuit challenges a newly enacted state law that would impose disclosure and risk management requirements on “high-risk” AI developers.
xAI filed a lawsuit against Colorado’s Attorney General, Philip Weiser, challenging Senate Bill 24-205, a law that imposes disclosure and risk-mitigation requirements on developers of high-risk AI systems used in decision-making areas such as education, employment, financial services, and health care.
The company said in its complaint that the law violates the First Amendment by limiting how its AI systems generate responses. xAI argued that the measure would require it to actively monitor and adjust its systems to avoid outcomes the state considers discriminatory, such as differences in results tied to factors like race, gender, or age.
xAI asserted that its flagship AI tool, Grok, would have to “conform to a controversial, highly politicized viewpoint” rather than remain objective.
The company also argued that the law conflicts with federal authority and burdens interstate commerce, saying the requirements are “preempted by federal law and violate the Constitution.” It asked the court to block enforcement before the law takes effect on June 30.
The Colorado Attorney General’s website says the law is intended to “protect consumers from algorithmic discrimination in consequential decisions made by high-risk artificial intelligence systems.”
The measure also requires developers to conduct impact assessments evaluating potential risks to individuals, including the likelihood of discriminatory outputs, and to document steps taken to address those risks. In addition, companies would be required to provide consumers with disclosures when AI systems are used to make or support consequential decisions, including information about how the system functions and how individuals may appeal or correct decisions.
The law grants enforcement authority to the Colorado attorney general, who may seek civil penalties for violations.

