Insights News

Generative AI, communications and the year of the election

2024 is the year of elections. Around half of the world’s population will be eligible to vote, with over 64 countries holding national elections, in a world with immediate and widespread access to generative AI. For organisations and communications professionals, this heightens the importance of a well-considered approach to harnessing and responding to AI.

What does it mean for communications?

Last week, Madano hosted an all-star team of experts for the launch event of our latest Orange Paper, Fool’s Gold: The unintended consequences of generative AI for communications, to discuss the issues it raises.

Joined by Dr Rupert Lewis (the Royal Society’s Chief Science Policy Officer), Peter Heneghan (co-founder of the Future Communicator, and former UK Government Head of Digital Communications), Richard Fernandez (Director of External and Public Affairs, PRCA) and Katherine Fidler (Science & Tech Editor), our panellists discussed the challenges of creating policy, reporting accurately, and protecting the public from manipulation given the pace at which generative AI has hit the mainstream.

For the communications sector getting to grips with generative AI, our panel identified serious adaptation challenges, such as maintaining truth, trust and authenticity, with significant threats sitting outside the industry itself. The panel was also optimistic that, with client reputations on the line, the sector has the opportunity to professionalise and use new tools responsibly.

How does this vary for politics?

In a year that’s full of geopolitically significant elections, can we say the same in politics? And, just how urgent is the challenge?

With populist candidates straining democratic norms in almost every major electorate, the winner-takes-all nature of politics creates different incentives to utilise AI compared with those we focused on for organisations in our latest Orange Paper.

As we discussed at our launch event for Fool’s Gold, regulation of generative AI in election communications is unlikely to keep pace with the ability to deploy the technology, and the challenges are already here. At best, rapid technological progression means this is an emerging policy practice that’s responding to change rather than shaping it, and as a result the regulatory landscape is going to vary significantly from election to election. In many cases, restrictions on usage by political parties will be largely reliant on fair play.

For example, in the United States, generative AI has already been used to develop campaign ads, but only around seven US states have proposed legislation to regulate or restrict the use of AI in elections, with limited federal oversight.

Are these problems new?

While misinformation in politics is nothing new, generative AI means it can be deployed faster and at greater scale, testing more messages to see what sticks with potential voters. It also means more people have the means to try. A political party’s supporters may, without any encouragement from the party itself, simply choose to harness AI to try to damage the opposition. The Mayor of London, Sadiq Khan, has already warned that a deepfake of him, made by an anonymous source and publicised by a third-party social media account, nearly led to serious disorder in late 2023.

What’s the solution?

In the UK, the Home Secretary, James Cleverly, recently aired his concerns that the forthcoming general election faces significant AI risk, but the UK’s approach depends on voluntary engagement with the technology sector as opposed to any pre-election regulation. Many of the biggest technology companies have agreed to pioneer voluntary methods to prevent the misuse of AI in elections, but their focus is typically on detecting and labelling potentially deceptive AI content – potentially based on monitoring by political parties themselves – rather than banning and removing it. One of the points our panel discussed is that by flagging AI-generated political content, political parties and social media platforms could add fuel to the fire and raise questions about the boundaries of accuracy, free speech and differing versions of truth.

While this may seem alarmist, there is already significant evidence that deepfakes were used in an attempt to shift the electorate in multiple directions in Taiwan’s elections in January, the first of many ballots this year.

What does this mean for your communications strategy?

The risk of political use of generative AI is another important reason why major organisations must firm up their approach to understanding and working with the technology in 2024. This will be particularly important in the highly regulated sectors we work in, such as net zero and healthcare. Both are already subject to intense and often misleading political debate, and can be extremely emotive for different parts of the electorate, so they may well find themselves in the crosshairs of those seeking to use AI to influence elections.

What’s the bright side? Well, as one Madano team member said, we’re passionate fans of democracy. Used correctly and responsibly, generative AI has the potential to expand citizens’ ability to participate creatively in democratic life, and to communicate about their democratic views and rights. We hope that this is the future that emerges this year.

At Madano, we’re working with businesses to develop and advise on insights, analytics and tools to harness AI. Contact us at [email protected] if you’d like to find out more.