
GAO Warns AI Tools Are Reshaping Housing Decisions, Raising New Risks For Consumers

Federal watchdog cites concerns around bias, transparency, and data accuracy in home buying and renting.


Artificial intelligence (AI) tools are increasingly influencing how Americans buy and rent homes. Federal investigators warn that the technology may also introduce new risks for consumers, according to a new report from the U.S. Government Accountability Office (GAO).

The report examines how AI is being used across the housing market, including to set rental prices, screen tenants, target housing advertisements, and estimate home values. While these tools can speed up decisions and reduce costs for companies, the GAO said they may also rely on flawed or incomplete data, which can disadvantage renters and buyers, particularly those in protected classes.

“AI can make housing processes faster and more efficient,” the GAO wrote, “but it can also perpetuate or amplify bias if not carefully designed, tested, and monitored.”

The GAO found that many AI-driven systems operate as “black boxes,” making it difficult for consumers to understand why they were denied an apartment, quoted a higher rent, or shown fewer housing options online. In some cases, landlords and property managers may not be able to explain how automated tools reached their decisions.

The report also flagged concerns about data quality. AI models often draw on historical housing data that may reflect past discrimination, potentially leading to outcomes that conflict with federal fair housing laws. Errors in underlying data, such as outdated credit or employment information, can further affect results.

“Without appropriate safeguards, AI tools could produce inaccurate or unfair outcomes that are hard for consumers to challenge,” the GAO said.

Federal agencies, including the Department of Housing and Urban Development, have begun issuing guidance on the use of AI in housing, but the GAO said oversight remains fragmented. The watchdog urged regulators to clarify how existing consumer protection and fair housing laws apply to automated systems and to improve coordination across agencies.

The GAO also noted that consumers often lack clear avenues to appeal or correct AI-driven housing decisions, leaving them with limited recourse when errors occur.

The report stops short of recommending new legislation but calls for stronger transparency, more rigorous bias testing, and clearer accountability for companies deploying AI in housing markets as adoption of the technology continues to expand.
