Google disabled certain AI-generated health overviews in search results that produced incorrect medical details.
Google said it removed some of its AI-generated summary responses to specific health-related search queries after a Guardian investigation found the summaries were inaccurate and potentially harmful.
The AI Overviews feature uses generative AI to create short summaries at the top of Google Search results. According to the investigation, in response to searches such as “what is the normal range for liver blood tests” and “what is the normal range for liver function tests,” the summaries delivered sets of numbers without context such as age, sex, nationality, or ethnicity, potentially misleading users with serious health conditions.
Google’s action affects the AI Overviews that appeared for those and similar health queries. The company has not provided a full list of the removed content, saying it does not comment on individual changes to search results. The removal applies only to specific queries and does not amount to a broader shutdown of the AI Overviews feature across health or other topics.
A Google spokesperson said, “We do not comment on individual removals within Search. In cases where AI Overviews miss some context, we work to make broad improvements, and we also take action under our policies where appropriate.”
The removal is a response to the Guardian investigation, not to any regulatory requirement or legislative mandate. The Guardian found that the summaries in question sometimes failed to provide sufficient context for interpreting medical information and could leave users with an incorrect understanding of serious health topics.
Health advocates welcomed the removals for those queries but noted that slightly different search terms could still trigger AI Overviews. Vanessa Hebditch, director of communications and policy at the British Liver Trust, said, “This is excellent news, and we’re pleased to see the removal of the Google AI Overviews in these instances. However, if the question is asked in a different way, a potentially misleading AI Overview may still be given and we remain concerned other AI-produced health information can be inaccurate and confusing.”