Google's artificial intelligence (AI) summaries are exposing users to serious health risks by serving up inaccurate and confusing medical information, a Guardian investigation published on 2 January revealed.
For instance, Google's AI advised pancreatic cancer patients to avoid high-fat foods. Medical experts condemned this as "completely wrong guidance, directly threatening patients' survival". Anna Jewell, Director of Support and Research at Pancreatic Cancer UK, explained that patients who followed this advice would not consume enough calories and would lose weight; in such a depleted state they would be too weak to undergo vital chemotherapy or surgery.
Similar inaccuracies appeared in summaries of liver function tests. The AI tool offered definitions of "normal" readings that diverged significantly from accepted medical reference ranges, which could lead severely ill liver disease patients to mistakenly believe they were healthy. Pamela Healy, Chief Executive of the British Liver Trust, described these summaries as "alarming", noting that they are particularly dangerous because liver disease often has no symptoms in its early stages, making testing crucial for detection.
Illustration: Caia Image/Alamy
Search results for gynecological cancer tests also contained inaccurate information. Doctors expressed concern that this could lead patients to complacency, causing them to overlook actual disease symptoms.
Experts warn that placing misleading information in prominent positions at the top of search results directly threatens public health. Stephanie Parker, Digital Director at the charity Marie Curie, observed that users frequently search while anxious, and that receiving inaccurate or decontextualized information at such moments can have severe consequences.
In response to the allegations, a Google representative claimed the examples were based on "incomplete screenshots". The American tech company stated that its results always cite reputable sources and recommend consulting experts. Google maintained that its AI Overviews feature, which uses generative AI to summarize core information on a topic or question, is "useful" and "reliable".
The Guardian's investigation adds to growing concerns about the reliability of AI-generated information. A previous study, in November 2024, found AI chatbots providing misleading financial advice. Experts caution that users often place implicit trust in AI-generated answers, leading to poor decisions in health and finance alike.
Binh Minh (According to The Guardian)
