As the dust settles on AI’s escape from the academic sphere and its arrival in public life, the stakes for marketers and Marketing are taking shape.

Conditioned by years of clickbait headlines, much of the anecdotal chatter I’ve heard frames the question in binary terms: will AI give Marketing departments superpowers, or will it make marketers redundant?

What is more helpful is to shift our thinking away from any binary, and to see in shades of grey.

The more useful set of questions starts here: where is the balance between productivity and efficiency? To what extent do AI-equipped marketers, more productive than ever before, bring about Marketing departments with reduced headcount? And, if that happens, what is the fallout?

What do we mean by AI?

To discuss artificial intelligence at all, it is first necessary to unpack what we mean by intelligence, and specifically how it can be distinguished from information and knowledge.

The late Sir Alistair MacFarlane, Professor of Engineering at Cambridge University from 1974 to 1989, broke this down neatly:

“Information describes: it tells us how the world is now. Knowledge prescribes: it tells us what to do on the basis of accumulated past experience. Intelligence decides: it guides, predicts and advises, telling us what may be done in circumstances not previously encountered, and what the outcome is likely to be.”

— Philosophy Now, 2013

As Dare Obasanjo has explained, “a key reason AI will disrupt white collar work is that many of these jobs require knowledge but not intelligence. Jobs like […] marketing don’t need original thinking ~80% of the time. They just need you to know the rules of the system.”

Central to the productivity-versus-efficiency debate is the delineation between generative AI’s capabilities as a knowledge base (a narrow tool that can inform future decisions based on data collected in the past, or “rules of the system”) and its potential to serve as an intelligent actor, one that could play co-pilot to humans in situations it has never encountered.

This intelligent actor, otherwise known as artificial general intelligence (AGI), is a frontier not yet reached. As GPT-4 explains it, “AGI aims to create machines that can not only solve specific problems, but also learn and reason about a wide range of subjects and tasks.”

So where are we at? And (when) will we reach AGI?

According to Sam Altman and the team at OpenAI, it appears to be a foregone conclusion. Their mission statement is, after all, “to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.” Certainly, looking at AutoGPT, Altman’s position feels reasonable.

Yet for David Deutsch, Visiting Professor of Physics at the Centre for Quantum Computation, Clarendon Laboratory, Oxford University, and an Honorary Fellow of Wolfson College, it is far from a done deal, and certainly not imminent:

“[AI] is not improving in the direction of AGI. If anything it’s improving in the opposite direction. A better chess playing engine is one that examines fewer possibilities per move. Whereas an AGI is something that not only examines a broader tree of possibilities but it examines possibilities that haven’t been foreseen. That’s the defining property of it. If it can’t do that, it can’t do the basic thing that AGIs should do. […] The thing that I like to focus on at present—because it has implications for humans as well—is disobedience. None of these programs exhibit disobedience. I can imagine a program that exhibits disobedience in the same way that the chess program exhibits chess. […] Real disobedience is when you program it to play chess and it says, “I prefer checkers” and you haven’t told it about checkers.”

On balance, per Ian Hogarth from Plural, “credible estimates [for the arrival of AGI] range from a decade to half a century or more.”

Although it is important to sketch out the distinction between generative AI and AGI, and to ask whether AGI is imminent, my belief is that if and when we reach AGI, with systems that are disobedient, all bets are off for white-collar workers of every stripe: marketers, engineers, lawyers and the rest.

I think it’s OK to say that we simply don’t know what happens in that scenario. As Josh Glancy has said, “if we go on to build AGI, then we may call into question the very meaning of what it is to be human. Our most unique abilities, to think creatively and sensitively, to stand back from a situation and consider its moral and philosophical implications, may soon be replicated.”

Moreover, any hypothesis about the impact of AGI could never escape the realm of speculation, because the conditional factors in play are so numerous and impactful: do governments intervene to limit usage, as Italy has with OpenAI? Does AGI become charitably democratised (per OpenAI’s original premise), or does it become a luxury good?