What To Know
- Shortly after receiving detailed evidence of the errors, Google removed AI Overviews for the exact queries “what is the normal range for liver blood tests” and “what is the normal range for liver function tests.”
- The statement suggests the company sees the problem as a correctable flaw, not a threat to user safety or a reason to slow the rollout of AI search summaries.
AI News: Rising alarm over AI-driven answers
Google’s bold leap into generative search has taken a troubling turn after investigations revealed that its new AI Overviews feature sometimes produces dangerously inaccurate medical explanations. These short, authoritative-looking summaries appear above normal search results and are framed as trusted snapshots designed to help users quickly understand key concepts. Instead, they have been found to contain errors that health experts say could mislead the public into ignoring serious medical conditions or delaying urgent care.

Google’s AI-driven search answers face scrutiny after dangerous medical errors
The investigation focused on what should have been a basic question, one typed into search engines countless times every day: the normal ranges for liver function blood tests. This AI News report frames the moment the issue turned from technical glitch to public risk. Rather than clarifying the ranges or warning that reference values differ by age, sex, and clinical history, Google displayed long lists of numbers with no explanation of how to interpret them. According to liver specialists, the information presented not only lacked context but could easily convince people with dangerously abnormal results that they were perfectly healthy.
Errors so serious that Google shut down specific summaries
Shortly after receiving detailed evidence of the errors, Google removed AI Overviews for the exact queries “what is the normal range for liver blood tests” and “what is the normal range for liver function tests.” The move is significant because AI Overviews are supposed to appear only when Google’s systems have “high confidence” that the answer is accurate. Yet here the feature delivered something experts called “dangerous” and “deeply alarming.”
A spokesperson for the tech giant declined to comment on the specific removal but said Google constantly improves its systems and takes action when content violates internal policies. The statement suggests the company sees the problem as a correctable flaw, not a threat to user safety or a reason to slow the rollout of AI search summaries. But charities and patient advocates insist the stakes are simply too high to treat these incidents as isolated glitches.
The warnings come from trusted voices
Vanessa Hebditch, communications director at the respected British Liver Trust, welcomed the decision to pull the faulty results but cautioned that more errors could still lurk behind reworded queries: small variations such as “LFT reference range” or “liver test ranges” continue to produce misleading information. She warned that liver disease is often silent until its very late stages, and normal-looking test values can still mask serious problems. Giving a patient false confidence could encourage them to skip appointments or delay diagnosis of cirrhosis or cancer.
The Patient Information Forum, which supports public access to trustworthy medical knowledge, agreed that removal of the search answers is merely the first step. The organization pointed out that millions of people already struggle to navigate health systems and misinformation online. When inaccurate answers are placed at the very top of search pages – and framed with the authority of Google – the danger expands exponentially.
Accuracy still questionable across wider topics
More troublingly, the investigation discovered multiple other AI summaries related to cancer, medication, and mental health that are still active and continue to mislead users.
Google defended some of these examples, saying they were backed by reliable sources and that internal clinical reviewers did not classify them as inaccurate. Critics countered that medical nuance cannot be captured in a three-sentence overview, especially when the formatting makes complex issues appear simple and final.
Technology commentators say the issue is structural, not incidental. Victor Tangermann of Futurism said Google now bears responsibility for errors that previously would have been blamed on third-party websites. By synthesizing results into a single “answer box,” Google inserts itself into the doctor–patient relationship. If the box is wrong, users may never scroll further, never see alternative sources, and never seek real medical guidance.
Others warn that the incentives driving AI Overviews – speed, convenience, and engagement – are directly at odds with what safe health information requires. Short answers without disclaimers may be useful for maths problems or movie trivia, but medical decisions depend on personalized assessment, not generalized averages.
A deeper reckoning ahead
As Google races to defend its 91 percent share of global search from rising AI competitors, it cannot afford missteps that erode public trust. Health experts are calling for stronger safeguards, explicit warnings, partnerships with medical agencies, and even the removal of AI summaries from health categories altogether until they prove reliable at scale.
While Google insists it measures and reviews output quality continuously, medical professionals argue the system is still being tested on the public in real time, without their consent and without guardrails built for safety-critical information. Behind the debate is a fundamental question: should a commercial technology company shape health knowledge for billions of people, or should that role remain with trained professionals, peer-reviewed bodies, and national health services?
The stakes stretch far beyond liver test numbers. Generative AI risks amplifying bias, misunderstanding, and harmful behaviour in every field where accuracy matters most. The wider concern voiced repeatedly is that errors in lifestyle advice or cooking tips may be inconvenient, but a misleading cancer screening summary or psychiatric recommendation could change the course of a person’s life. When misinformation appears dressed as expertise at the top of a search engine used worldwide, even a tiny percentage of incorrect answers could produce vast collective harm. The global community now waits to see whether Big Tech responds with radical transparency and caution, or with incremental patches to preserve momentum.
Media Reference:
https://futurism.com/artificial-intelligence/google-ai-overviews-dangerous-health-advice
For the latest on AI in Health Information, keep logging on to Thailand AI News.