Our chief digital and AI officer, Luke Alexander, reflects on the latest in a series of national and international ‘leadership in AI’ roundtables.
In the last two years I’ve talked to a lot of people about AI. From boardrooms with a handful of people to conference halls and Zoom calls with hundreds.
And what I’ve realised is that, while we might start off talking about AI, before long we’re headed off on a hundred different routes: philosophical, legal, ethical, practical, cultural, ontological.
Some of my discussions have been at our Leadership in AI roundtable series. There’s something about getting some of our cleverest and most engaging clients and friends in one room and drawing the curtain of the Chatham House Rule around us that makes everyone share more honestly and more thoughtfully than on a public stage.
And in these discussions I see more than ever that sometimes when we talk about AI, what we’re really talking about is… something else.
So to celebrate entering our second year of Four’s Leadership in AI roundtable series, which has now visited London, Cardiff, Riyadh and Dubai, I wanted to share five things that we’re really talking about, when we talk about AI.
Culture and context
We talk a lot about how the world of AI is so fast-paced, and the landscape so constantly changing, that it can be close to impossible to keep up. But in our discussions it’s rare for people to feel this is what’s holding them back from AI adoption.
In fact, the number one tangent (or rabbit hole, if you prefer) for roundtable participants is the way their organisations, governments and societies help or hinder (ok, mostly hinder) their ability to innovate with AI.
And once you touch on this subject, all bets are off. The promise (and in some cases, hype) of AI gives people a glimpse of better ways of doing things in their organisations. Why do we have that particular process? Who said only these people get to engage with the customer in this way? Perhaps we don’t need that century-old ritual in 2025?
Moments later, people aren’t talking about particular AI tools or techniques. They’re swept up in a vision of what’s possible if you start to question the built-in assumptions within organisational and social structures. It has, on occasion, been a hugely liberating conversation in our sessions.
It’s particularly interesting when it reaches outside of organisations into countries and cultures. There’s a different perspective, for example, in the Gulf compared with the UK: sustained investment allows fast-growing nations to ‘leapfrog’ the legacy technologies and ways of working of the UK, EU and US, and public attitudes towards questions of privacy and data protection are very different.
And the conversation often ends up in one place: the ability (or inability) of governments to effectively regulate or legislate around AI use. As one participant very clearly put it earlier this year: “if they can’t do it, we have to accept that we’ll be making these decisions ourselves”.
That’s a lot of ethical and legal burden to put on the shoulders of organisations who are “simply not set up to cope with these kinds of questions”.
These burdens are keenly felt by marketing teams, who “often act as a bridge between different parts of an organisation that don’t really want to talk to each other”. In most cases, marketing teams find themselves “the canary in the coalmine”, responsible for testing out new AI tools and advising on their use even outside of their areas of expertise.
This was a view echoed by most participants, although interestingly those representing healthcare companies and organisations felt the opposite, as clinical settings are seeing particularly speedy adoption across a range of diagnostic use cases.
Ourselves, and what we’re worth
Roundtables often take a little while to warm up, and you’ll get some platitudes and safe opinions from the first couple of people to speak. Our most recent Dubai roundtable, however, went straight in with one of the most controversial statements I’ve heard about AI. “How I think people feel about AI,” said the brave participant, “is a bit ashamed.”
And she was absolutely right. Shame – or something like it – is one of those reflexes that people don’t admit to, but which most people seem to feel when they use AI. Another way to put it: “it feels like cheating”. This theme comes up again and again: “I know that I’m supposed to use it, but I know that other people will look at it and say ‘where is all the hard work?’”
These are real quotes from senior marketers, often leading teams that on any objective measure are ahead of the curve in AI use.
It’s a natural, and almost universal, response, but I don’t think it’s really shame at all. I think it’s a fundamental disconnect between the value people place on their effort, and the value people place on their output.
Ask anyone who’s felt imposter syndrome in their role and they’ll tell you they spend twice as long on everything they produce as a result. If you’re worried about how others will receive your work, you will take more care on it. That’s a natural – and positive! – response.
But the measure of your value isn’t the amount of strain and stress that went into your work, it’s the quality of what you’re producing. And I think AI is giving us an unprecedented opportunity to interrogate that relationship.
And, perhaps, to go beyond that and interrogate what we as humans need from each other in the workplace, and to recognise that it may not be such a bad thing to do a little less, a little slower, with a little more thought, while the AIs whir away on our behalf.
Delegation and collaboration
One of the big misconceptions around AI is that it requires little in the way of technique.
After all, it’s (in most cases) a chatbot, right? How hard can that be?
Well, very hard! The trick is that when we use AI, the kinds of skills people are used to seeing as difficult, such as programming and data analysis, are replaced by ‘softer’ skills like delegating and collaborating.
And the secret is, most people are much worse at those two skills than they think they are. “I’ve noticed,” said one roundtable participant, “that the people who you want to work with, because they are creative, collaborative and fun to be around, get the most out of AI. Because they’re better at delegating.” Others noted that this was a situation where newer entrants to the workplace, despite generally higher levels of technical savvy, were limited by their inexperience in working in collaborative partnerships and understanding how to delegate important work.
I’d go further than this. I think what a lot of people are seeing is actually a distinction between task allocation and delegation. Most people are reasonably good at allocating tasks: giving work they can’t do (because they don’t have the skills or training) to someone who can. But people are much worse at delegating. Really skilled delegation involves handing off work that you are capable of doing to someone else. And for a lot of people that means accepting that it will be carried out in a different way, that the quality may not match what they could produce, and so on.
Using an AI assistant like Copilot or ChatGPT pulls this into sharp focus: you have to learn lots of advanced delegating techniques, like knowing how to give a really good brief, how to cope with getting a result that, while high quality, isn’t necessarily what you would have expected, and how to interrogate and review a response without simply re-doing it all yourself.
This is hard. And it means that a lot of the conversations about effective use of AI are really about effective structures and processes within organisations and teams that reward effective delegation, reinforce trust and de-risk experimentation.
Why we’re here
Well-run organisations have a clear vision, mission and values. And in the best-run, those align as closely as possible with the needs of their customers, stakeholders and audiences.
As organisations start to use AI more, and that use of AI challenges existing ways of working within the organisation, I’ve seen participants begin to question these visions, missions and values.
Take one basic example, from a participant whose organisation regularly delivers detailed, complex reports to executive audiences: “If the customer is going to drop the report straight into ChatGPT, then why shouldn’t we just cut out the middle man and give them the AI summary first?”
In this situation, was the value delivered to the customer ever in the scale and scope of the report, or was it in the key findings and insights drawn from it? Did the company feel they had to deliver a product of a certain size and shape because that’s how it was always done? And has AI demonstrated that what they felt the customer needed, and what they actually needed, maybe wasn’t as closely aligned as they had thought?
I’ve heard versions of this from participants in all sorts of industries, in every location. Sometimes it’s a helpful realisation. Other times, it can cause some deep soul-searching.
When we reach this topic, I like to encourage participants to look at it through a particular lens. It is, roughly speaking, easy enough to divide organisational uses of AI into three buckets: assistive (driven by a human, working as a copilot or collaborator – whether through chat, as a semi-autonomous agent, and so on), automation (trusted autonomous uses, integrations into specialist or algorithmic workflows), and transformation. In the first two, the organisation is doing what it has always done, but faster and more efficiently, and perhaps with better results through, for example, personalisation.
The third bucket, ‘transformation’, contains all those use cases which fundamentally disrupt or re-think the way an organisation or team works and what it delivers to its customers or stakeholders. I ask participants to “imagine you could start the organisation from scratch tomorrow – with no legacy. How would you build it? What would you offer? And how would you design the organisation to deliver it?”
The answer always includes AI. But fundamentally, it’s not about AI. It’s about thinking in a more open-minded way about purpose and practice.
AI
OK. Sometimes, when we talk about AI, we’re actually just talking about AI. While AI use is now pretty much universal among marketers (not the case in our first roundtable at the start of 2024!), it’s rare to find someone who isn’t interested in practical advice on getting the most out of the technology.
People have a huge amount of pride in the hard-fought techniques that they’ve learned or developed to get the most out of this brand-new technology, and I often have to gently stop roundtables from becoming peer-to-peer training sessions.
And you can see why. It’s a fascinating technology. Yes, machine learning is nothing new. But the explosion of model capabilities and usable tools in 2023 really was a total shift in how we think about what technology can do. I love seeing how it is starting to democratise (a word that comes up frequently in our discussions) access to the kind of techniques that were previously inaccessible without learning a programming language or setting up a server.
Sometimes the conversation breaks down a little, because it becomes clear that different people mean different things by the term ‘AI’. From one participant: “we realised our legal team thought AI meant Alexa, and our creative team thought it meant Firefly”.
And I’ve noticed those working in programmatic and other data-led disciplines bristle slightly when someone implies that AI started in 2023.
But whatever each participant’s definition, the concerns are the same. They want to know that the technology is reliable, that it’s tested and checked for bias and hallucinations, that it’s consistent and that guardrails are in place. When it comes to the big foundation models, we are all, almost without exception, ‘takers’ not ‘makers’ of the tech – so while we can lobby and influence its development, our role really is to make sure we are using it in the most appropriate and responsible way.
At that point, those who have a good understanding of how the models work (at a theoretical level) are in the best position. They’re able to articulate – as one participant put it – that “AI is, for now, a true black box – like every person you’ve ever worked with”, and recognise that “the pace of change isn’t just exponential but stepped, with bursts of innovation that change what’s possible on a regular basis”.
This blog post is already far longer than I wanted it to be. But I can’t bring myself to ask Copilot to edit it down. I guess I’m not as good a delegator as I think I am, after all…
To end I’ll just give one more quote from our most recent roundtable: “AI isn’t a tool. It’s a mirror”.
Perhaps that’s the real gift of AI in this moment: not just the efficiency gains or the democratised access to powerful tools, but the permission to step back and ask the bigger questions.
If you’d like help asking those questions, to hear more about our Leadership in AI roundtable series, or to talk to us more generally about your organisation’s AI strategy, I’d love to hear from you.
Please drop me a line via email to luke.alexander@four.agency.