AI in Medical Communications: Why Scepticism is a Professional Discipline
By: Lee Gazey
AI is now ubiquitous across industries, and in communications especially, the promise of speed and efficiency is proving tantalising for clients and management teams alike. But are we risking quality and credibility in the pursuit of short-term gains?
A recent debate at the International Society for Medical Publication Professionals (ISMPP) conference put this topic front and centre.
Four panellists sought to cut through much of the hype to explore where AI’s promises hold up, where they fall short, and what this means in practice for Medical Communications.
In the session, titled Faster, better, cheaper: the value proposition of AI in Medical Communications, each panellist provided a distinct perspective on AI’s perceived benefits. None of the arguments stood up in isolation, but taken together they pointed to a conclusion the industry still struggles to say out loud.
AI can help protect quality, but it can’t be responsible for it.
The session opened by examining credibility, recognising that medical publications can live or die by it. Errors don’t just damage reputation; they introduce regulatory risk, erode clinician trust and, in the worst cases, can harm patients.
Used well, AI can scan vast volumes of data, surface inconsistencies and flag potential errors. Acting as a tireless research assistant, it helps reduce cognitive load for human teams. The benefit here isn’t speed (we’ll get to that), it’s quality control.
It’s a persuasive argument, but it comes with an important caveat. AI doesn’t understand meaning, intent or consequences. It can highlight issues, but it can’t judge what is ethically sound or clinically appropriate. Those decisions still require human judgement and responsibility.
Quality only improves when humans remain accountable for interpretation and intent. When that authority is handed to the technology, quality suffers.
Speed and the productivity paradox.
One of the most compelling arguments for AI is speed: the promise of being able to do more, faster. For clients, the appeal is obvious. Fast delivery is typically associated with lower costs, higher output and better value for money.
In theory, AI should remove dull and repetitive tasks, freeing people to focus on higher-value work. In practice, learning to use AI well takes time and effort. Teams must decide where it adds value, understand how to apply it effectively, and consistently review and correct its outputs. In Medical Communications, where quality and trust are non-negotiable, time saved through automation is often reallocated to oversight.
This helps explain why the productivity gains promised by AI don’t always materialise. Until organisations resolve how AI fits into ways of working, speed alone may remain more aspiration than reality.
Expanding what’s possible and deciding what’s appropriate.
Rather than focusing on quality or speed, the third argument framed AI as a way of making previously uneconomic work viable - for example, enabling broader literature surveillance and structured summarisation for internal planning. Outputs that once required too much time or resource could suddenly be delivered. In this view, AI doesn’t just help MedComms teams do the same work faster, it expands what they’re able to offer.
It’s an appealing proposition, but familiar tensions quickly re-emerge. If teams are already stretched, where does the additional time come from? And just because something can be done more easily, does that mean it should be?
The final panellist argued against wholesale adoption of AI, not because the technology lacks power, but because it comes with real trade-offs. AI is environmentally costly, legally ambiguous, prone to over-confidence, and leads to outputs that are merely “good enough”, a standard that sits uneasily with the ‘credibility is everything’ mantra in Medical Communications.
The answer, he suggested, isn’t rejection, but discipline. Work should be broken into small, bounded tasks, with deliberate decisions about what AI should do, and what must remain human. The greatest risk isn’t that AI will replace people, but that it will be trusted too quickly in areas where judgement still matters most.
From conversations with others at the conference, one thing was clear. The teams getting the most value from AI aren’t its loudest advocates; they treat scepticism as a professional discipline.
What scepticism looks like in practice.
This discussion resonated strongly with our use of AI in Medical Communications at Madano. AI is here to support our teams and enable us to provide greater value to our clients. For example, it helps our medical writers in the earliest stages of quality control - not to replace judgement, but to surface inconsistencies and potential errors before human review.
Rather than assuming AI makes work faster by default, we invest in targeted training so teams understand where it can add value and where it can slow us down. That’s how we ensure any time freed up is real and does not compromise quality. This allows our most valuable resource, our consultants, to focus on the things that will have the greatest positive impact on our clients' businesses - problem solving, strategic counsel and innovative thinking. All areas where human expertise still makes the greatest difference.
So, what side of the debate are we on?
At Madano, we love the idea of using AI to help us do things that were not previously possible, but we never lose sight of the fact that our most precious commodity, the thing that keeps our clients coming back year after year, is our people.