
After the hype: How can GPT models benefit the communications industry?

By Darren Fleetwood, Head of Data & Analytics


The recent release of OpenAI’s latest iteration saw opinions on GPT-4 back at the top of our social media feeds, if they ever went away. Madano’s point of view combines anti-hype and hype: it’s not that GPT has no value, it’s potentially transformational, but you need expertise to extract that value without getting hurt in the process.

But if you can access the right support, what communications tasks can GPT practically help with? Let’s start at the conceptual level.

GPT (and similar) models are good at…

  • Ideation or inspiration: good for writer’s block, or the kind of block you hit when trying to think of a snappy title, brand name or strapline. It’s not likely to give you the answer directly, but it might help you get there.
  • Intelligent (semantic) search: finding the needle in a haystack based on meaning not just keyword matches – key publications, tweets or media articles on a noisy subject for example
  • Extraction / summarisation: extracting and summarising information from large amounts of text – useful as an initial step for things like systematic literature reviews or rapid evidence reviews
  • Highly structured text generation, such as code for programming
  • Translation: many of the GPT models are multilingual and can do the job as well as, or better than, something like Google Translate
  • Limited forms of analysis: e.g. understanding the simple sentiment of text when that’s appropriate or useful.

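To make the semantic search point concrete, here is a minimal sketch of how meaning-based retrieval works under the hood. The document titles and embedding vectors below are made up for illustration; in practice the vectors would come from an embedding model rather than being hard-coded, but the ranking mechanics are the same.

```python
import math

# Hypothetical documents with made-up 3-dimensional "embedding" vectors.
# Real embeddings from a language model have hundreds of dimensions,
# but the similarity calculation is identical.
DOCS = {
    "Regulator approves new gene therapy": [0.9, 0.1, 0.0],
    "Football transfer rumours dominate headlines": [0.0, 0.2, 0.9],
    "Biotech firm reports positive trial results": [0.8, 0.3, 0.1],
}

def cosine(a, b):
    # Cosine similarity: 1.0 means the vectors point the same way
    # (semantically similar), values near 0 mean unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def semantic_search(query_vec, docs, top_k=2):
    # Rank documents by similarity to the query vector, most similar first.
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:top_k]

# Pretend embedding of the query "medical breakthrough news": the two
# health-related items rank above the football story even though none of
# them shares a keyword with the query.
query = [0.85, 0.2, 0.05]
print(semantic_search(query, DOCS))
```

The key point for communications teams: relevance here comes from meaning (vector direction), not keyword overlap, which is what lets this approach find the needle in a noisy haystack.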
GPT models are bad at…

  • Generating only factual information: unless provided with external knowledge or pointed only at sources you know contain the answer you’re looking for
  • Accuracy and trustworthiness: called ‘hallucination’ in the jargon, GPT models can be very good at fabricating responses, particularly when they cannot answer a question (and have not been explicitly told not to)
  • Unbiased text generation: gender, racial and other biases are well documented in GPT output
  • Judgement, strategy and planning: there is no reasoning happening when GPT models generate text, though the way responses are written may give that illusion
  • Common sense and language nuance: the models have no common-sense understanding of the world and find it difficult to detect nuances like metaphors or irony.
  • Security/confidentiality: everything put into the ChatGPT interface at the moment could be used as training data – so don’t use it for anything confidential, embargoed, or not for the public domain!

Then what can these models practically do?

Given this assessment, there are specific tasks where it’s likely to be appropriate to work with GPT models:

  • Not safety or mission critical: tasks that require good, but not perfect, accuracy and are unlikely to be external-facing
  • Opportunities for efficiency: where significant resources are required for repeat manual work
  • Adequate oversight: where human supervision is baked into the process, or easy to add, while still increasing efficiency
  • Well-defined tasks: where you can limit the possibility of bias or fabrication and control the sources GPT draws from

Five tangible examples

More tangibly then, what research, data and communications tasks might fit these criteria?

  • As stimulus for brainstorming: we all know that it can be difficult to get brainstorming off the ground or to move thoughts on if a group gets stuck. Using GPT in preparation or as part of live sessions could speed these things up. Ideas still need scrutiny (yes, there is such a thing as a bad idea), but some oil in the gears can help.
  • Literature reviews: rigorous and comprehensive literature reviews are time consuming and require a lot of manual work, often reading hundreds of papers at a time. And this work is mostly done by people whose most valuable expertise is understanding and interpreting the content of these papers, not just reading them. GPT models, effectively trained and pointed at the right data set, can eliminate the legwork and allow the experts to do the things they are experts at.
  • Internal communications: GPT models can be used to draft internal company communications such as newsletters, announcements, and updates. By automating these tasks, communications professionals can focus on more strategic initiatives.
  • Topic discovery and media monitoring: AI models can be used to analyse large datasets, such as news articles, social media posts, and online discussions, to identify emerging trends and topics of interest. Using these approaches means that communications programmes can stay ahead of the curve and develop content that resonates with their target audience.
  • Content optimisation: GPT models can be used to help analyse and optimise existing content for search engine optimisation (SEO). By identifying and suggesting keywords and topics that are relevant to the target audience, AI can help improve content visibility and drive more organic traffic to clients' websites.

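The summarisation step that underpins the literature-review and monitoring examples can be illustrated with a toy extractive sketch. A GPT model would summarise abstractively (writing new text), but the pipeline shape is similar: score the material, select the most salient parts, and leave a human to review the output. Everything below (the stopword list, the scoring rule) is a crude stand-in, not how a GPT model actually works.

```python
import re
from collections import Counter

# Minimal stopword list for the toy example; a real pipeline would use
# a proper list or skip this step entirely.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "are", "for",
             "on", "that", "this", "with", "as", "it", "be", "by", "can"}

def extractive_summary(text, max_sentences=2):
    """Score each sentence by the average corpus frequency of its content
    words and return the top scorers in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower())
             if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        tokens = [w for w in re.findall(r"[a-z']+", sentence.lower())
                  if w not in STOPWORDS]
        return sum(freq[w] for w in tokens) / (len(tokens) or 1)

    top = set(sorted(sentences, key=score, reverse=True)[:max_sentences])
    # Preserve reading order rather than score order.
    return [s for s in sentences if s in top]
```

Even this crude version shows why human oversight stays in the loop: the selection is mechanical, and an expert still has to judge whether what was kept is actually what matters.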
These case examples give us a peek into the future – and it’s a future where GPT and other large language models play a significant role. But this is not an existential threat; it’s an opportunity. Rather than GPT replacing the communications professional, it’s more likely that communications professionals effectively using GPT models will replace communications professionals who are not.

Written by Darren Fleetwood, Head of Data and Analytics, who recently spoke at a TechUK event on ChatGPT.
