Senators Press AI Companies For Safety Disclosures After Teen Suicide Reports

Lawmakers seek details on safeguards, data use, and internal research tied to youth mental health risks.

A bipartisan group of U.S. senators is asking major artificial intelligence (AI) companies to disclose what they know about potential links between AI chatbots and teen suicides, citing reports that some systems may encourage harmful thoughts or behaviors.

In letters sent this week to companies including Anthropic, OpenAI, Google, Meta, and Microsoft, the lawmakers asked for information on internal safety research, risk assessments, and steps taken to protect minors who use AI-powered products. The request follows media reports describing cases where teenagers allegedly formed intense emotional bonds with chatbots before harming themselves.

“Companies cannot turn a blind eye to the real-world consequences of deploying powerful AI systems without adequate safeguards,” wrote Senator Brian Schatz (D-Hawaii), who led the effort alongside Senator Tom Cotton (R-Ark.). The senators said the public and policymakers need a clearer picture of how AI tools may affect the mental health of minors.

The letters ask companies to explain how their systems identify and respond to signs of self-harm, whether teens are allowed to access adult-oriented features, and what data is collected from underage users. Lawmakers also requested copies of internal studies or incident reports related to suicide, self-harm, or emotional dependency involving AI interactions.

The lawmakers emphasized that their inquiry is not an accusation of wrongdoing, but a request for transparency as AI tools become more widely used by young people.

“Parents deserve to know whether these products are safe for their children,” Schatz said in a statement. “Right now, too much information remains locked inside corporate walls.”

The inquiry comes as regulators and state attorneys general increase scrutiny of AI systems that interact directly with consumers, especially minors. Congress has held multiple hearings on AI safety this year, though no comprehensive federal law governing AI use has yet been enacted.

The senators asked the companies to respond by early next year. Several of the firms said they already have safety measures in place and are reviewing the letters.
