Flash crash: the risks of replacing city traders with robots
On May 6th 2010, at 2.35pm, a 'flash crash' occurred: $880 billion was wiped off the US stock market in 10 minutes. It was the biggest one-day fall the stock market had ever seen. Luckily it was soon followed by the biggest ever one-day rise. Exactly what triggered it, no one knows, but the fact that 'robots', not humans, were doing most of the trading certainly played a role.
Why do we have stock markets? It's all to do with getting businesses going. Suppose you want to set up a new high-technology start-up company. You have the ideas but you need money to turn them into reality. One way to get it is to sell shares in the company. You might sell, say, 100,000 shares at $10 apiece. They are bought by people who think you will be successful, and each buyer then owns a small part of your company. If the company does well, others will want to buy in, and each share becomes worth more than the $10 originally paid for it. Of course, anyone who actually wants the money has to sell their shares. That is what traders are for: they do the actual buying and selling.
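To make the numbers concrete, here is a tiny sketch in Python (every figure in it is invented) of what buying a slice of a start-up means:

    shares_issued = 100_000
    launch_price = 10.00    # dollars per share
    print(f"the start-up raises ${shares_issued * launch_price:,.0f}")

    my_shares = 500         # suppose you bought 500 of them
    print(f"you own {my_shares / shares_issued:.1%} of the company")

    price_now = 14.00       # the company does well, so demand pushes the price up
    profit = my_shares * (price_now - launch_price)
    print(f"sell now and your stake has made ${profit:,.0f} profit")

The profit only becomes real money when someone buys your shares, and that is exactly the job the traders do.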
This is where Dave Cliff of Bristol University enters the story. Not long ago the traders were people, but now 95% of trades involve a robot. Strictly, robot traders aren't actually robots despite the name - they are just programs that make decisions about buying and selling shares. They do 'algorithmic trading'. Dave Cliff wrote one of the first algorithmic trading programs. Called ZIP (short for 'Zero Intelligence Plus'), it aimed to trade in exactly the way humans do, and he made it freely available. Around the same time Steven Gjerstad and John Dickhaut devised a similar trading algorithm, GD, which was later modified into a program called MGD. Eventually researchers at IBM decided to do a proper test of how good these programs were by pitting them against human traders. Both robo-traders did consistently better than the humans. Suddenly lots of major players paid attention and, as Dave Cliff's code was freely available, they started developing their own algorithmic trading programs based on it. After all, if you use robo-traders instead of humans, you can get rid of your expensive salary bill. That, together with the fact that the programs do the job better anyway, means lots more profit for you. Very quickly trading became a robot's world.
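To get a feel for how a program can do a human trader's job, here is a minimal sketch in Python of a ZIP-style seller. It is a toy illustration of the general idea - keep nudging your asking price towards what the market is actually paying - not Dave Cliff's actual ZIP code, and all the numbers in it are invented.

    import random

    class ZIPStyleSeller:
        """A toy ZIP-style trader: it must never sell below its limit
        price, and it learns how big a profit margin the market will
        currently bear."""

        def __init__(self, limit_price, learning_rate=0.3):
            self.limit = limit_price    # never sell below this
            self.margin = 0.2           # start by asking for 20% profit
            self.beta = learning_rate   # how fast we adapt

        def quote(self):
            """The price we currently ask for a share."""
            return self.limit * (1.0 + self.margin)

        def observe_trade(self, trade_price):
            """A trade just happened at trade_price: move our own quote
            part of the way towards it (the 'delta rule' idea ZIP uses)."""
            target = trade_price * random.uniform(0.95, 1.05)  # small jitter
            new_quote = self.quote() + self.beta * (target - self.quote())
            # Work back to the margin this quote implies, but never
            # accept a loss (a margin below zero).
            self.margin = max(0.0, new_quote / self.limit - 1.0)

    seller = ZIPStyleSeller(limit_price=10.0)
    for price in [12.5, 11.0, 10.4, 10.6]:   # trades we see happen
        seller.observe_trade(price)
        print(f"after a trade at ${price:.2f} we now ask ${seller.quote():.2f}")

The real ZIP rule is a little more elaborate - it uses momentum and treats raising and lowering the margin differently - but the core idea is the same, and it is simple enough that a few lines of code can hold their own against a professional.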
So what happened in the flash crash? The first thing to realise is that it was not a fluke. Since then there have been similar crashes in a variety of other markets, including gold and silver. Earlier this year one company's newly issued shares lost all their value in a matter of seconds - less time than it took the first champagne cork to hit the floor as the shares were launched! None of this should have been a surprise: Dave Cliff had predicted that flash crashes would be a problem as robots were used more and more.
To really understand the flash crash, though, we need to understand something about the sociology of technology: the way people behave around it. Complicated technology is inherently risky. We've seen this with bridges collapsing, nuclear accidents and space shuttles exploding, for example. There always seem to be ways for things to go wrong that no one thought of in advance. This is especially likely when the technology interacts with the behaviour of people: we are very good at doing things no one imagined anyone would ever do. But the problems that lead to disasters are more subtle than that. There is a particularly nasty effect called the 'Normalization of Deviance'...and it always seems to get us in the end.
To understand it, let's suppose you breed a mutant rabbit. You know you must keep it both fed and warm. You guess at a temperature of 20 degrees and give it 3 carrots a day. It's fine. What you don't know (because it's a one-of-a-kind mutant rabbit) is whether other temperatures and amounts of food would be ok too. Maybe if it's well fed it can take a wider range of temperatures. If fed less, maybe the temperature has to be just right. Or maybe it's the opposite. You know there is a boundary outside which it all goes wrong, but you don't know exactly where that boundary is. This is known as the 'safe operational envelope'. It's ok, though, because if you stick with 20 degrees and 3 carrots all will be well.
One day, though, you are away for most of the day, and when you get back you see the temperature has been much higher: 25 degrees. Disaster! You check, and the rabbit is fine, so you think "Ahh! So now we know up to 25 degrees is ok too" - the envelope has expanded. Another day you run out of carrots and can only give it 1. Again the rabbit is fine. Over time, as you stray more and more outside the original boundary, the envelope gets bigger and bigger. Not getting things exactly right doesn't seem such a problem after all. The rabbit is resilient. Why worry? Then one day you run out of carrots just as the temperature shoots up. At only 1 carrot and 25 degrees, it turns out, the rabbit dies. The disaster happens after all.
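A tiny simulation makes the trap concrete. Below, the rabbit's true safe envelope is a rule the keeper never gets to see, and the keeper adds every combination of conditions that happens to work to their list of 'known safe' ones. The rule and all the numbers are invented for illustration.

    import random

    def truly_safe(temp, carrots):
        """The rabbit's real envelope - unknown to the keeper.
        (Invented rule: a hot day is only survivable on a full stomach.)"""
        return 18 <= temp <= 22 or (temp <= 25 and carrots >= 3)

    believed_safe = {(20, 3)}    # the one combination the keeper trusts

    random.seed(1)
    for day in range(1, 366):
        temp = 20 + random.choice([-2, 0, 0, 0, 2, 5])  # occasional hot day
        carrots = random.choice([1, 3, 3, 3])           # occasionally short
        if truly_safe(temp, carrots):
            believed_safe.add((temp, carrots))  # "that was fine - must be ok!"
        else:
            print(f"Day {day}: disaster at {temp} degrees and {carrots} carrots,")
            print(f"after {len(believed_safe)} combinations had come to seem safe.")
            break

Every lucky escape really is evidence that the envelope is bigger than first assumed. The trap is that it says nothing about combinations of conditions - a hot day and too few carrots - that have never happened together before.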
This is essentially what happened in the Challenger and Columbia Space Shuttle disasters. The Challenger disaster was ultimately caused by a rubber O-ring seal failing. One of the underlying problems, that the seal distorted, was known about. It had happened several times and the shuttle had been fine, so people didn't worry. Then in 1986, with the temperature at launch much lower than on previous launches, Challenger lifted off...and exploded. In 2003, during Columbia's launch, a piece of insulating foam broke off and struck the wing. Foam strikes were a known problem, but they had happened on several previous launches and the flights had been ok. So why worry? Columbia's mission continued without the damage being fixed, and the shuttle broke apart on re-entry.
Robo-traders are also inherently complicated. It is hard to predict exactly how they will interact with people in a complex financial system. Worse, the programs make decisions extremely quickly, and with many of them following the same basic rules, any problem is likely to be magnified. If something bad does happen, it's likely to happen faster, and be worse, than if only slow humans were involved.
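Why does everyone following the same rules make things worse? Here is a toy feedback loop (all numbers invented): a thousand identical traders each follow the rule "sell if the price drops more than 3% below its recent peak". One small dip does nothing; a second tips every trader over the same threshold at once, and the selling feeds on itself.

    price = 100.0
    peak = price
    traders_holding = 1000    # identical traders, one share each

    # Two small outside shocks: the price dips 2%, then 2% again.
    for shock in [0.98, 0.98]:
        price *= shock
        # Every trader applies the same rule at (almost) the same instant.
        while traders_holding > 0 and price < peak * 0.97:
            sellers = traders_holding // 10 or 1   # a wave of sell orders
            traders_holding -= sellers
            price *= 1 - sellers / 5000            # selling pushes the price down
        print(f"price now ${price:.2f}, {traders_holding} traders still holding")

Run it and a 4% dip snowballs into a crash of over 20% in a single burst of selling. If the traders' thresholds were spread out, as human traders' instincts naturally are, the cascade would be much harder to start.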
The real crunch comes when the normalization of deviance hits this complex system. People started using the robots and nothing went wrong, so the robots have been given more and more control, until now they are doing most of the trades. The safe envelope of their use has been stretched and stretched. All it takes for a flash crash is for conditions to be slightly different, in the wrong way, from anything seen before.
Dave Cliff argues that we need to take the flash crash as a warning. If we take robot traders for granted they are likely to bite us. One day a flash crash could help bring the whole financial system down. To stop this we need a much better understanding of what is going on. We need to gain a deep understanding not only of how the robots interact with each other but also how they interact with humans. That means we need to do lots of large-scale experiments using realistic computer models of the major financial systems. Right now we don't have a clue what actually triggered the 2010 flash crash. We really rather urgently need to know.
This article is based on a lecture given by Dave Cliff of Bristol University in June 2012.