Since the dawn of machinery and the first flickerings of computer technology, humanity has been obsessed with the idea of artificial intelligence (AI) – the concept that machines could one day interact, respond and think for themselves, as if they were truly alive.
Every year, the possibility of an ‘intelligent technology’ future becomes more and more of a reality – as algorithms and machine learning improve at a lightning-fast rate. According to experts across the globe, machines will soon be capable of replacing humans in a variety of jobs – from writing bestsellers, to composing Top 40 pop songs and even performing open-heart surgery!
However, the biggest questions remain: how long until that point – and how did we get here?
The origins of AI
When attempting to chart the future, it’s always essential to consider the past. While the idea of ‘artificial intelligence’ had been speculated about in fiction for centuries – as far back as Mary Shelley’s Frankenstein or Karel Čapek’s R.U.R. (Rossum’s Universal Robots) – it was not until Alan Turing’s 1950 paper, Computing Machinery and Intelligence, that the concept of AI first became more than a fantasy.
Most famous for his wartime work breaking the Enigma cipher at Bletchley Park, the English mathematician and computer scientist spent his post-war years devising what became known as the Turing Test. Basic but effective in nature, the test asks whether a machine can hold a realistic text conversation with a human judge and, in doing so, convince the judge that it too is human.
The test has served as a benchmark for AI ever since its introduction in Turing’s paper, yet it was not until 2014 that a chatbot – Eugene Goostman, designed by a team of Russian and Ukrainian programmers – successfully convinced 33% of its human judges. Turing had suggested that fooling more than 30% of judges would count as a pass – but clearly there is plenty of room for improvement in the future.
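The pass mark described above can be made concrete with a toy simulation. The sketch below is purely illustrative – the function name, the fixed ‘fooling’ probability and the judge count are all hypothetical, standing in for a real panel of human judges.

```python
import random

def run_imitation_game(judges, fool_probability, seed=0):
    """Toy model of Turing's imitation game: each judge chats with a hidden
    partner and guesses whether it is human. fool_probability is the assumed
    chance that the machine convinces any one judge."""
    rng = random.Random(seed)  # fixed seed keeps the simulation repeatable
    fooled = sum(1 for _ in range(judges) if rng.random() < fool_probability)
    return fooled / judges

# Turing's popular reading: fooling more than 30% of judges counts as a pass.
rate = run_imitation_game(judges=1000, fool_probability=0.33)
print(f"{rate:.0%} of judges fooled -> pass: {rate > 0.30}")
```

With a large enough panel, the observed rate settles near the underlying probability – which is why Eugene’s 33% result sat just above Turing’s 30% threshold.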
The evolution of AI
From the time of Turing’s Test, AI was limited to basic computer models – with computer scientist John McCarthy coining the phrase ‘artificial intelligence’ in 1955. While working at MIT, he co-founded an AI laboratory and developed LISP (LISt Processing), a programming language that became a mainstay of AI research, designed around offering expansion potential as technology improved in the future.
Despite some base model machines showing promise – from Shakey the Robot, dubbed the ‘first electronic person’, in 1966, to anthropomorphic androids WABOT-1 and WABOT-2 from Waseda University – the field of AI started to plateau in the 1980s. It wasn’t until Rodney Brooks in 1990 that the idea of computer intelligence would be revitalised.
In his seminal 1990 paper, Elephants Don’t Play Chess, Brooks suggested that the robotics field had been approaching the idea of artificial intelligence all wrong. Instead of creating machines that could carry out ever-more advanced singular ‘top-down’ tasks – from playing the piano to calculating maths problems – intelligence should be built ‘bottom-up’, emerging from a machine’s direct interaction with the world around it.
It might sound obvious to us now, thanks to a lifetime rooted in the advances of AI, but back in the early 1990s, the suggestion that artificial intelligence should be reactive to its surroundings was revolutionary.
The future AI job market
One of the biggest ‘bottom-up’ advances for artificial intelligence is the ability to plan and respond to tasks intuitively. Perhaps the biggest breakthrough in this regard came in 2016 when AlphaGo, a custom program developed by Google’s DeepMind AI unit, beat one of the world’s best ‘Go’ players, Lee Sedol.
The ancient Chinese board game had long been seen as one of AI’s greatest challenges, the sheer variety of possible moves demanding players evaluate and react in countless different ways to each turn. That a program was finally able to challenge this level of ‘humanity’ was a real breakthrough – even more so than IBM’s Deep Blue’s victory over chess champion Garry Kasparov in 1997.
Because of this leap forward in intelligence, experts from across the globe now predict an AI program will be able to win the World Series of Poker within just two short years. Not only that, but the same reactive technology is currently being investigated by the banking sector – with NatWest’s ‘Cora’ chatbot in particular tipped to replace all telephone banking by 2022.
What about other job sectors? Are they too under threat from the advancement of artificial intelligence? Well, recent research from the analyst firm Gartner suggests that 85% of customer interactions in retail will be AI-managed by 2020. The other 15%, mainly the human sales process, will take a fair while longer – with 2031 the closest estimate for full replacement.
What can be done?
Because automation has crept into modern society so gradually, it can be extremely difficult to predict how the job market will evolve as the technology gets ever more advanced. Perhaps the biggest challenge will be ensuring ‘artificial intelligence’ does not lead to the mass wipe-out of several job sectors – almost certainly requiring new legislation to be passed, as well as a rethink of the employment market overall.
However, we have already seen shifts to incorporate the digital-driven advances in a variety of sectors, from banking to farming and beyond. Many predict that learning new skills early will be crucial for any affected sector – which looks set to be many of them. In short, the only way to beat the machines is to join them – or, at the very least, know how to use them.
James Tweddle, AI vs Humanity
James has had a keen interest in the advances of artificial intelligence since being introduced to the Terminator films at an early age. While he doesn’t think Judgment Day will happen, he stays abreast of the latest AI news, just in case.
Twitter: @jamestweddle / Instagram: jamestweddle