The GLP-1 information gap: what AI misses

By: Dan McLeod

Summary: 

As AI becomes embedded in how people search for health information, the gap between published clinical evidence and the information patients receive is widening.

Our analysis of top GLP-1 search queries in the UK found that public interest has moved on from efficacy entirely, focusing instead on cost, practicalities, differentiation and side effects, with significant evidence of off-label usage and private procurement outside traditional NHS pathways. When we put these questions to leading consumer-facing LLMs, direct citations to clinical literature were few and far between. Instead, AI systems consistently reached for commercially compromised, outdated, or poorly referenced sources. For healthcare communicators, the message is clear: if your clinical evidence isn’t structured to be visible to AI, it essentially doesn’t exist.

GLP-1s have had a cultural moment unlike almost anything else in recent pharmaceutical history.

From a Netflix documentary to Ozempic weight loss journeys going viral across social media, public interest has surged, bringing with it something the industry wasn’t fully prepared for: a tidal wave of consumer-driven information-seeking now increasingly answered not by a GP, a pharmacist, or even a search engine, but by AI.

For healthcare communicators, this has huge implications, and our Insights team’s analysis suggests the industry is not yet responding at the scale the problem demands.

What are people actually asking?

We analysed top search queries related to leading GLP-1s in the UK market and, perhaps surprisingly, found that the most fundamental question - does this drug actually work? - has disappeared. Queries like “how well does [drug] work” and “is [drug] effective” were notably absent, suggesting the clinical case has already been won in the court of public opinion.

Instead, public curiosity tends to fall into four main categories, illustrated with a simple bucketing sketch after the list below.

Pricing. The most common queries centre on cost, such as “how much is [drug] UK price”, reflecting the reality that many patients are self-funding outside of NHS pathways.

Functionality. Questions like “how does [drug] work”, “how to inject [drug]”, “what to eat on [drug]” are highly practical and patient-focused, suggesting people are either starting or seriously considering treatment.

Differentiation. Queries such as “is drug A the same as drug B”, “why is drug C better than drug D” point to an increasingly crowded and, for many, confusing market.

Side effects. Questions like “is [drug] safe” and “[drug] withdrawal symptoms” highlight what is arguably the area where clear, evidence-based information matters most.
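By way of illustration, this kind of categorisation can be approximated with simple keyword rules. The sketch below is a toy reconstruction, not our actual taxonomy - the keyword lists, queries and drug names are hypothetical stand-ins.

```python
# Toy illustration of keyword-based query bucketing.
# Keywords, queries and drug names are hypothetical stand-ins,
# not the taxonomy used in our analysis.

BUCKETS = {
    "pricing": ["price", "cost", "how much"],
    "functionality": ["how does", "inject", "what to eat", "dose"],
    "differentiation": ["same as", "better than", "alternative"],
    "side effects": ["safe", "side effect", "withdrawal"],
}

def categorise(query: str) -> str:
    q = query.lower()
    for bucket, keywords in BUCKETS.items():
        if any(k in q for k in keywords):
            return bucket
    return "other"

queries = [
    "how much is wegovy uk price",
    "how to inject mounjaro",
    "is ozempic the same as wegovy",
    "ozempic withdrawal symptoms",
]
for q in queries:
    print(f"{q} -> {categorise(q)}")
```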

The picture that emerges is of a patient population that has already decided to engage with these drugs and is now navigating the practicalities largely on their own, a dynamic amplified by the NHS’s slow and limited roll-out of weight loss GLP-1s.

Searches like “slimming injections”, “Ozempic alternative”, “[drug] without informing GP” and “compare [drug] prices UK” point towards off-label usage and private procurement – people operating outside the clinical relationship, and therefore outside traditional clinical guidance.

From search engine to AI

This shift in patient behaviour would be concerning enough in a traditional search environment, but the search landscape itself is changing. With Google and other search engines now powered by AI, and AI assistants like ChatGPT, Gemini, and Perplexity becoming the first port of call for health questions, the way people find medical information is fundamentally different from what it was a few years ago.

For pharma communicators, this means that if your clinical evidence isn’t visible to AI and isn’t reflected in the sources AI systems choose to cite, it effectively doesn’t exist.

To understand what patients are actually being told, we put the top questions from our search analysis to a range of consumer-facing LLMs, focusing on two commonly used GLP-1 treatments. Rather than testing whether AI knew the answers, we wanted to understand where it sourced its information, and how well that information held up.
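The audit loop itself is simple to sketch. The snippet below is a minimal illustration, not our production tooling: it assumes an OpenAI-compatible chat endpoint whose answers include source URLs inline (as search-grounded assistants typically do), and the model name and questions are placeholders.

```python
# Minimal audit sketch, not production tooling. Assumes an
# OpenAI-compatible chat endpoint whose answers include source URLs
# inline (as search-grounded assistants typically do). The model name
# and questions are placeholders.
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

URL_RE = re.compile(r"https?://[^\s)\]]+")

def cited_urls(question: str) -> list[str]:
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    answer = resp.choices[0].message.content or ""
    return URL_RE.findall(answer)

for q in ["is semaglutide safe?", "how much does tirzepatide cost in the UK?"]:
    print(q, cited_urls(q))
```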

The results were instructive and, in places, deeply concerning.

The source problem: where clinical evidence goes to disappear

Direct citations to academic papers were, in our analysis, essentially absent. When someone asks an LLM “is [drug] safe?”, the response does not reach for the relevant semaglutide or tirzepatide safety literature. Instead, citations are often pushed downstream to online pharmacy blogs, FAQ pages, and help centres. These are often well-intentioned and sometimes perfectly accurate, but they exist on a quality spectrum that ranges from robust to actively problematic.

This all points to a larger structural problem: scientific evidence is simply not built to survive the journey from publication to patient question to AI response. The average clinical paper is not written for discoverability. It is not optimised for the way AI systems extract, summarise and cite information, and it assumes a reader who already knows the context.

The result is that the sources AI systems can process and cite (digestible, consumer-facing content) end up carrying disproportionate weight, regardless of rigour.
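One commonly discussed remedy is to publish evidence summaries alongside machine-readable structured data, so the clinical context travels with the page. A hedged sketch follows: the type and property names come from schema.org’s MedicalScholarlyArticle, but every value here is an invented placeholder.

```python
# Hedged sketch of one "GEO" tactic: exposing an evidence summary as
# schema.org structured data so machines can parse the clinical context.
# The type and property names follow schema.org's MedicalScholarlyArticle;
# all values are invented placeholders.
import json

article = {
    "@context": "https://schema.org",
    "@type": "MedicalScholarlyArticle",
    "headline": "Placeholder: safety profile of a GLP-1 receptor agonist",
    "datePublished": "2025-01-15",
    "publicationType": "randomized controlled trial",
    "author": [{"@type": "Person", "name": "A. Researcher"}],
    "citation": "https://doi.org/10.0000/placeholder",
}

# Embed the output in a <script type="application/ld+json"> tag on the page.
print(json.dumps(article, indent=2))
```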

Across our analysis, we found this playing out in three specific ways (a source-triage sketch follows the list).

Bias. LLMs were found citing content produced by private healthcare suppliers and companies with a direct commercial interest in the drug being discussed. FAQ pages and blog posts from online weight-loss clinics were presented as informational sources, often without flagging the commercial context. Even where the underlying information was accurate, the proximity of purchase links and promotional framing creates a material risk of distorted interpretation.

Reputability. We found instances of LLMs citing Reddit threads, low-quality media coverage and (because the dead internet theory seems to be alive and well) AI generated blog posts with no references whatsoever. The citation problem compounds itself: an LLM cites a health article, which in turn cites nothing or cites something that can’t be verified. The further downstream you go, the further you get from actual evidence.

Information lag. Multiple instances were found of LLMs returning information from reputable sources that had not been updated since August 2024. In a therapeutic area moving as quickly as GLP-1s, information that is even 12 months old can be meaningfully wrong, and patients have no way of knowing that.
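Pulling these three failure modes together, a citation audit can triage each cited source against them. The sketch below is purely illustrative: the domain lists and cut-off date are invented examples, not the criteria used in our analysis.

```python
# Illustrative triage of cited sources against the three failure modes.
# The domain lists and cut-off date are invented examples, not the
# criteria used in our analysis.
from datetime import date
from urllib.parse import urlparse

COMMERCIAL = {"exampleweightlossclinic.co.uk"}       # bias risk
LOW_QUALITY = {"reddit.com", "example-ai-blog.com"}  # reputability risk
STALE_BEFORE = date(2024, 8, 1)                      # information-lag check

def triage(url: str, last_updated: date | None) -> list[str]:
    domain = urlparse(url).netloc.removeprefix("www.")
    flags = []
    if domain in COMMERCIAL:
        flags.append("bias: commercial interest in the drug discussed")
    if domain in LOW_QUALITY:
        flags.append("reputability: weak or absent referencing")
    if last_updated and last_updated < STALE_BEFORE:
        flags.append("information lag: content predates cut-off")
    return flags or ["no flags"]

print(triage("https://www.reddit.com/r/Ozempic", None))
print(triage("https://exampleweightlossclinic.co.uk/faq", date(2024, 6, 1)))
```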

What this means for healthcare communicators

Despite GLP-1s’ unusually rich evidence base - extensive trial data, wide media coverage, and active clinical guidance - AI systems are still routinely reaching for commercially compromised, outdated, or simply unreliable sources to answer patient questions. If it’s happening with GLP-1s, it’s happening everywhere.

So, for healthcare communicators, the questions to ask are these: which sources does an AI reach for when someone asks about your drug, your therapy area, your organisation? How well does your clinical evidence translate into the formats AI actually draws from? Is your voice present in that ecosystem, and if so, is it credible, current and clearly referenced?

The scale of AI adoption in everyday health information-seeking is only going one way, and the gap between published evidence and what patients are actually being told shows no sign of closing on its own.

It’s worth noting that AI’s tendency to mishandle citations is not a new or niche concern. A Guardian investigation found that Google's AI Overviews were presenting misleading health advice, with inaccurate information surfaced in summaries seen by everyday users. When the prestigious NeurIPS conference recently discovered hundreds of hallucinated references in accepted papers (material that had passed multiple rounds of human review) it illustrated just how easily the problem evades detection even in controlled, expert environments. In the uncontrolled environment of a patient asking an AI about their medication, the stakes are considerably higher and the safeguards considerably fewer.

At Madano, we work with healthcare organisations to improve their visibility in AI generated responses through our GEO offer. If you’d like to understand how your organisation’s evidence base is (or isn’t) showing up in AI responses, get in touch – [email protected].
