First, let's try to place our feet on some relatively firm ground with a bit of realism:
When people say they do artificial intelligence (AI), they're usually talking about machine learning (ML), or maybe just statistics.
Any claim to "AI" in a system lies on a spectrum from the simplest forms of automated decision-making, such as an if-then logic statement, to the as-yet unrealised realms of superhuman cyborg intelligence. Where you want your tech to stand on that scale, and which flavours of AI you wish to deploy in your work, are not what matters. As with any technology, it's about learning to use it as and when it suits the specific needs of the job.
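To make the "simplest" end of that spectrum concrete, here is a minimal sketch of rule-based automated decision-making: a hand-written if-then rule. The scenario (flagging invoices over a threshold) and all names are invented for illustration.

```python
# A hypothetical if-then rule: the simplest form of automated
# decision-making on the AI spectrum. No learning involved - the
# "knowledge" is hard-coded by a human.
def flag_invoice(amount: float, threshold: float = 10_000.0) -> bool:
    """Flag any invoice above the threshold for human review."""
    return amount > threshold

print(flag_invoice(12_500.0))  # True: flagged for review
print(flag_invoice(400.0))     # False: passes through automatically
```

Whether a rule like this counts as "AI" is exactly the kind of definitional question that matters less than whether it suits the job at hand.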
We can think of any autonomous system we use as if it is a colleague in a new kind of multidisciplinary team. In a healthy setup, the functions that the autonomous system performs can complement our own and amplify our output. But for this to work we must cultivate productive human-machine relations between ourselves and the tools we deploy.
Viewed this way, the modern worker, team and organisation are part of a human-machine system that might benefit from any of the following:
Whether or not you have access to people with experience in the areas above, there is a lot to be said for learning the basics yourself. The underlying skills of artificial intelligence are statistics (for analysing data) and maths (for understanding algorithms), along with coding (for writing and editing the systems) – all increasingly valuable and transferable skills in the modern world.
A good human-machine collaborator is comfortable hacking systems together in whatever form is appropriate to do the job, and in working with (potentially very messy) data sets. By developing an attitude within the organisation of care for data and tools, every team member can become a steward of cleaner, more interrogable data sets, and an evolving set of digital systems to power the business.
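As one illustration of that data stewardship in practice, here is a minimal sketch of cleaning a messy record before analysis: stripping stray whitespace, unifying casing, and coercing numbers stored as text. The field names and sample rows are invented for the example.

```python
# A hypothetical cleaning step for a messy data set. Each record is a
# dict as it might arrive from a spreadsheet export.
def clean_record(raw: dict) -> dict:
    cleaned = dict(raw)
    # Strip stray whitespace and unify casing in the text field.
    cleaned["name"] = raw["name"].strip().title()
    # Coerce numbers stored as text; treat blanks as missing (None).
    revenue = raw["revenue"].replace(",", "").strip()
    cleaned["revenue"] = float(revenue) if revenue else None
    return cleaned

rows = [
    {"name": "  acme ltd ", "revenue": "1,200.50"},
    {"name": "BETA CORP", "revenue": ""},
]
print([clean_record(r) for r in rows])
```

Small, well-commented helpers like this are the unglamorous work that makes a data set interrogable by both humans and machines.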
<aside> 💡 AI Ethics – should you build it?
We have not dived into the potential ethical issues of AI here. The choice to use – or not use – any technology comes loaded with ethical values that should be considered before acting. This is not exclusive to AI, but novel and important ethical issues do arise as we hand over increasing levels of agency to autonomous agents.
For a three-minute intro to some of the core issues in AI ethics, you might like this video from Neoco founder Ben Bland (and some of his best friends): Future: Pending. And for a deeper dive, there are many different guides to AI ethics but Ethically Aligned Design, by IEEE, is a good start (and again related to Ben's work, in this case with IEEE ethics standards).
</aside>
Well, yes and no.
The automation of job functions is an age-old process, yet in some ways it is only just getting started, and there is a lot of uncertainty about how it will play out. The one thing that has held consistently true so far is that previous predictions have been way off. Tasks once thought to be safely within the realm of human ability, and impossible for machines in the foreseeable future, have since been solved by machines (e.g. driving a car or assessing an X-ray for pathologies), while the reverse has also been true: machines have performed poorly at tasks that were predicted to be easy for them. This is Moravec's Paradox.
The paradox of predicting and planning the curve of automation adoption has led us to a point where machine automation is now set to replace human work at various points across the entire spectrum of employment, not just at the "low end" of unskilled labour. Lawyers and doctors may be just as exposed as delivery agents and cleaners. But while everyone should consider the potential for losing their job to a robot in future, mass automation does not necessitate the loss of jobs overall. It may instead produce openings for new kinds of work, such as in fields in which computers are currently weak (e.g. empathy, creativity and lateral thinking) or in increasingly collaborative roles between humans and machines (e.g. data engineering).
> **A world reshaped by digitalisation**
>
> Automation and new technologies are creating new jobs and demand new skills, but also removing the need for people to do some tasks.
>
> Nearly 14% of jobs in OECD countries are likely to be automated, while another 32% are at high risk of being partially automated – so nearly 1 in 2 people is likely to be affected in some way. Young people and those with low skills are those at highest risk – but new technological developments are now also affecting the jobs of the high-skilled too.
>
> And while the platform economy is creating new opportunities, there are concerns about the quality of some jobs there.

– From What is the future of work? by OECD.
Despite the argument above that redundancies from job automation might be accompanied by a parallel rise in new kinds of jobs – a process that has been going on for centuries – we may yet be near the end of the road. In the past, economic growth was accompanied by proportionate increases in income and job creation. But in recent years the economy has grown while incomes have stagnated or shrunk, as computing power has allowed businesses to create more profit with fewer people. Academics Erik Brynjolfsson and Andrew McAfee named this The Great Decoupling.
> The great paradox of the Second Machine Age [...] is that even though we’ve had record levels of wealth creation, employment hasn’t kept up.