Digital News

How we use (and don't use) AI in our work

Written by Grace McLean & Magdalene Karipidou, Digital

As digital marketers, we’ve always been curious, sometimes even restless, when it comes to experimenting with new tools and technologies – it’s part of the way we work. But over time, we’ve also learnt something else: not every tool, no matter how shiny or sophisticated, is right for every task (or every client), and the real skill lies in knowing the difference.

That perspective matters now more than ever, because generative AI is changing the way marketing is delivered. It can be tempting to let algorithms take the lead, particularly when the promise of speed and efficiency is dangled in front of us. Yet in the sectors we work in – life sciences, energy, education, housing – the stakes are too high to leave everything to machines. Here, trust, compliance and accuracy are non-negotiable.

Which is why, at Madano, we’ve set clear boundaries around how and where we use AI – and just as importantly, where we don’t.

Why we don’t let AI decide who sees what

A big part of our job is making sure campaigns reach the right audiences at the right time – and in B2B that isn’t just about job titles or demographics. It’s about understanding groups of stakeholders: how they think, what influences them, and how they talk about the issues that matter to them.

We often use AI tools to help with this – scraping the web to find where conversations are happening, who is engaging with them, and what language they use when they discuss specific topics or products. That helps give us a broad picture of the landscape.

But that’s only the starting point. The decisions about which audiences to prioritise, what success should look like for each, and how to reach them are made by people, not algorithms.

This is why we don’t hand targeting over to platform AI tools such as LinkedIn Accelerate. While those tools can be useful for optimising cost-per-click or conversion, they’re designed to maximise platform-defined metrics – impressions, form-fills, downloads. In our world, that’s rarely the full story.

In highly regulated and reputation-sensitive sectors, “success” often looks different: raising awareness among professional groups, building credibility with policymakers, or creating trust around complex issues. In healthcare, for example, reaching the wrong audience with promotional material risks breaching compliance rules. In energy, a poorly framed message could undermine credibility with regulators. These are not risks we delegate to an algorithm; while AI can support the process, it cannot carry that responsibility.

AI can give you the ‘what’ but not the ‘so what’

AI has undoubtedly transformed how quickly we can access and process vast datasets, but speed isn’t the same as strategy.

Tools can surface trends, but they can’t determine which of those trends actually matter in the context of a client’s goals, market position, or stakeholder landscape. A sudden spike in conversation may look important, but without context it could just be noise.

As Curtis, Research & Project Director within our Insights practice, puts it, “AI can help us more efficiently process data, but when it comes to crafting a compelling narrative that resonates with our stakeholders, that’s still where we, as consultants, lead the way.”

We don’t just report the ‘what’, we define the ‘so what’. Making that leap requires context, empathy and judgement – the kinds of qualities no algorithm has yet shown it can replicate.

Why final creative will always stay human

Across the creative industry, AI is already part of the background process – used to prototype visuals, storyboard ideas, or mock up pitch imagery. We do the same. It helps us move faster from abstract concept to something tangible, and it accelerates collaboration internally and with clients.

But when it comes to the final creative – the images, messages and symbolism that shape a campaign – we keep that firmly in human hands.

For us, creative isn’t decoration. It’s persuasion. In the kinds of strategic, values-driven campaigns we lead, creative can influence behaviour, shift perspectives, and shape reputations – all of which require nuance, accountability and cultural intelligence.

As our Design Director, Kieran Sturt, puts it: “AI can help spark ideas, but it’s our curiosity, creative judgement, and craft that shape the stories that stick.”

An image that resonates in London could be tone-deaf in Berlin – or misinterpreted entirely in Singapore. AI doesn’t understand those subtleties. Representation matters. Language matters. And in our work, getting it wrong isn’t just a missed opportunity but a credibility risk.

Our position is clear

The message of our 2024 Orange Paper, Fool’s Gold, still stands. The role of our industry is to shape emerging technologies thoughtfully, not simply cheerlead them.

That means:

  • Using AI where it adds value – speeding up research, surfacing patterns, and unblocking early creative exploration.
  • Not using AI where trust, compliance and cultural nuance are at stake.
  • Continuing to upskill ourselves, so we understand both the power and the limitations of this technology.

It’s incumbent on us all within the industry to upskill, be more knowledgeable and ensure not only effective and efficient use of AI, but protection from the erosion of truth and trust. Responsible marketing isn’t about rejecting AI but about knowing exactly where to draw the line.

If you would like to discuss further how your brand should be using AI and what tools are available, please get in touch.
