
Understanding Ultron: A Turing test for world domination

by Peter McOwan, Queen Mary University of London

An Ultron-like robot: image by DrSJS from Pixabay

Avengers: Age of Ultron is the latest film about robots or artificial intelligences (AI) trying to take over the world. AI is becoming ever present in our lives, at least in the form of software tools that demonstrate elements of human-like intelligence. The AI in our mobile phones, for example, applies and adapts its rules to learn to serve us better. But fears of AI's potential negative impact on humanity persist, projected onto characters like Ultron, a super-intelligence accidentally created by the Avengers.

But how do the evil AIs of the movies relate to scientific reality? Could an AI take over the world? How would it do it? And why would it want to? To judge an AI movie villain we need to consider the whodunit staples of motive and opportunity.

Motive? What motive?

Let's look at the motive. Few would say that intelligence in itself unswervingly leads to a desire to rule the world. In the movies, AIs are often driven by self-preservation, a realisation that fearful humans might shut them down. But would we give our AI tools cause to feel threatened? They provide benefits for us, and there seems little reason to create a sense of self-awareness in a system that searches the web for the nearest Italian restaurant, for example.

Another popular motive for AIs' evilness is their zealous application of logic. In Ultron's case, the logic runs that the goal of protecting the Earth can only be accomplished by wiping out humanity. This destruction by logic is reminiscent of the notion that a computer would select a stopped clock over one that is two seconds slow: the stopped clock is right twice a day, whereas the slow one is never right. Ultron's plot motivation, based on brittle logic combined with indifference to life, seems at odds with today's AI systems, which reason mathematically with uncertainty and are built to work safely with users.
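To see just how brittle that reasoning is, here is a minimal sketch in Python (all names invented) of a scorer that judges clocks purely by how many moments per day they show exactly the right time, the naive metric behind the stopped-clock argument.

```python
# Toy illustration of the stopped-clock fallacy: judge each clock solely
# by how many moments per day it shows exactly the correct time.

def moments_right_per_day(clock):
    if clock["stopped"]:
        return 2   # a stopped 12-hour clock shows the right time twice a day
    # A running clock with any offset (even two seconds) is never exactly right.
    return 0 if clock["offset_seconds"] != 0 else 24 * 60 * 60

stopped_clock = {"name": "stopped", "stopped": True, "offset_seconds": 0}
slow_clock = {"name": "two seconds slow", "stopped": False, "offset_seconds": 2}

# Brittle logic picks the useless clock because it maximises the naive metric.
best = max([stopped_clock, slow_clock], key=moments_right_per_day)
print(best["name"])   # -> stopped
```

A system that reasons with uncertainty would instead weigh how close each clock is to the true time on average, and the slow clock would win easily.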

Opportunity Knocks

When we consider an AI's opportunity to rule the world, we are on somewhat firmer ground. The famous Turing Test of machine intelligence was set up to measure a particular skill: the ability to conduct a believable conversation. The premise is that if you can't tell the difference between the AI's skill and a human's, the AI has passed the test and should be considered as intelligent as a human.

So what would a Turing Test for the 'skill' of world domination look like? To explore that we need to compare antisocial AI behaviours with the attributes we would expect of a human world dominator. World dominators need to control important parts of our lives, say our access to money or our ability to buy a house. AI does that already: lending decisions are frequently made by an AI sifting through mountains of information to decide your creditworthiness. AIs now trade on the stock market too.
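To make 'an AI sifting through mountains of information' concrete, here is a deliberately tiny, hypothetical scoring rule in Python. Real lenders use far richer models trained on vast datasets; the features, weights and threshold below are invented for illustration.

```python
# A toy credit-scoring rule: combine a few applicant features into a score
# and approve the loan if the score clears a policy threshold.

def credit_score(income, missed_payments, years_at_address):
    score = 0.4 * min(income / 1000, 50)   # cap income's influence
    score -= 15 * missed_payments          # penalise missed payments heavily
    score += 2 * years_at_address          # reward stability
    return score

def approve_loan(applicant):
    # The threshold stands in for a lending policy tuned on past outcomes.
    return credit_score(**applicant) > 10

print(approve_loan({"income": 45000, "missed_payments": 0, "years_at_address": 3}))  # True
print(approve_loan({"income": 32000, "missed_payments": 2, "years_at_address": 1}))  # False
```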

An overlord would give orders and expect them to be followed. Anyone who has stood helplessly at a shop's self-service till as it makes repeated bagging-related demands of them already knows what it feels like to be bossed about by AIs.

Kill Bill?

Finally, no megalomaniac Hollywood robot would be complete without at least some desire to kill us. Today military robots can identify targets without human intervention. Currently a human controller gives permission to attack, and while it's not a stretch to say that the potential to kill autonomously exists in these AIs, we would need to change the computer code to allow it.
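Here, purely as a hypothetical sketch, is what that human-in-the-loop safeguard might look like in Python. The point of the paragraph above is that only the code enforces it: flip one flag and the safeguard is gone.

```python
# Hypothetical human-in-the-loop gate: the system may identify targets
# autonomously, but engaging one requires explicit human authorisation.

HUMAN_APPROVAL_REQUIRED = True   # the line of code standing between
                                 # supervised and fully autonomous attack

def engage(target, human_approved=False):
    if HUMAN_APPROVAL_REQUIRED and not human_approved:
        return f"Holding fire on {target}: awaiting human authorisation."
    return f"Engaging {target}."

print(engage("identified target"))                        # holds fire
print(engage("identified target", human_approved=True))   # engages
```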

These examples arguably show AI in control of limited but significant parts of life on Earth. To truly dominate the world, movie style, these individual AIs would need to start working together to create a synchronised AI army: that bossy self-service till talking to your health monitor and refusing to sell you beer, then both ganging up with a credit-scoring system that will only raise your credit limit if you buy a pair of trainers with a built-in GPS tracker and eat the kale from your smart fridge, and only after the shoe data shows you completed the required five-mile run.

It's a worrying picture, but fortunately I think it's an unlikely one. Engineers worldwide are developing the Internet of Things, networks connecting all manner of devices to create new services. These are pieces of a jigsaw that would need to join together to form the big picture of total world domination. It's an unlikely situation: too much has to fall into place and work together. It's a lot like the infamous plot hole in Independence Day, where an Apple Mac and an alien spaceship's software inexplicably turn out to be cross-platform compatible.

Our earthly AI systems are written in a range of computer languages, hold different data in different ways, and use different, incompatible rule sets and learning techniques. Unless we design them to be compatible, there is no reason why two safely designed AI systems, developed by separate companies for separate services, would spontaneously blend to share capabilities and pursue some greater common goal without human intervention.
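As a toy illustration of that incompatibility, here are two hypothetical device records describing the same person's day, each designed independently. Field names, units and structure all differ, so a naive merge finds no common ground; only a human-written adapter could join them.

```python
# Two independently designed device reports about the same customer.
till_record = {"customerID": "A-17", "items": ["beer"], "ts": "2015-05-01T18:02"}
fitness_record = {"user": 17, "run_km": 8.05, "recorded": 1430503320}  # ~5 miles

def shared_fields(a, b):
    # Naively intersecting the two schemas finds nothing in common.
    return set(a) & set(b)

print(shared_fields(till_record, fitness_record))   # -> set()
```

Even the identifiers disagree: one system knows the shopper as the string "A-17", the other as the number 17, and nothing in either program links them.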

So could AIs, and the robot bodies containing them, pass the test and take over the world? Only if we humans let them, and help them a lot. Why would we?

Perhaps because humans are the stupid ones!