Bill would require developers and deployers of certain artificial intelligence systems to implement safeguards and transparency measures.
Washington state lawmakers introduced House Bill 2157, advancing a proposal to regulate so-called high-risk artificial intelligence (AI) systems used to make consequential decisions in areas including employment, housing, credit, health care, education, and insurance.
The bill, sponsored by House members including Rep. Cindy Ryu (D-Shoreline), would require developers and deployers of high-risk AI systems operating in Washington to take “reasonable care” to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. It would also establish documentation, transparency, and risk management requirements for both developers and deployers of these systems.
Under the proposal, a deployer could not use a high-risk AI system to make a “consequential decision” without implementing a risk management policy and completing an impact assessment that addresses the system’s purpose, risks, inputs, and outputs. Developers would be required to disclose information about a system’s intended uses, limitations, and performance, and to make materials available to help deployers complete required impact assessments.
Consumers would have to be notified when they are interacting with a high-risk AI system, and if an adverse decision affecting them is based on data beyond what they provided, deployers would have to disclose the principal reasons for that decision and the degree to which the AI system contributed.
The bill’s definitions exempt certain systems acquired by the federal government, regulated financial institutions, specified insurers, and select health care entities. High-risk generative AI systems would be subject to additional requirements for output identification and accessibility.
Ryu said state oversight is necessary “in the absence of federal guidelines and regulations” to protect individuals from algorithmic discrimination.
The bill comes as several states have introduced or passed similar laws despite a Trump Administration executive order seeking to quell state AI legislative efforts in favor of future federal legislation.
HB 2157 has been referred to the House Technology, Economic Development & Veterans Committee. If passed, its provisions are slated to take effect Jan. 1, 2027.