Enter the maze

The challenge of the multi-core


Computing is facing its biggest challenge ever. Whether we come through the crisis will depend on whether we can solve a problem that has eluded us for 60 years or more.

So what's the problem? Well, a famous observation known as Moore's Law is breaking down. It is the observation that the number of transistors, the basic switches that computers are made up of, on new chips has doubled every year and a half. It led to a prediction that the speed of computers would increase in a similar way, and that prediction has held good for over 60 years. Since the first computers were built in the 1940s, the speed of the best new designs has indeed doubled every year and a half.

Moore's Law isn't a law of nature, more an observation about the ingenuity of electronic engineers and chip designers. Year after year they have found new ways to make transistors smaller, and so pack more of them onto chips, as well as using them in ever more clever ways. That has meant chips have become more powerful every year, and as a result computers have become both smaller and faster. Computers that once would have filled a room now fit in your pocket. Your music player, phone, camera and games console all fit in one small device when once each would have needed a computer of its own. Not only that: tasks that once needed supercomputers can now be done on a home laptop. Without Moore's Law holding true, none of that could have happened.

Then in 2003 things started to go wrong. The improvements started to tail off. Moore's Law started to break down.

Up until 2003, as the number of transistors that could fit on a chip increased, the actual performance of the resulting computer increased by a similar amount. After 2003, more transistors were still being packed onto chips, but, unlike before, this was no longer leading to better performance. The chips were just getting more complicated, not faster. The engineers could no longer find useful ways to use the extra transistors. Moore's Law was grinding to a halt.


Does it matter? Well, it doesn't herald the end of the world, but if computers are to go on fulfilling their potential, the improvements need to continue. That isn't really the problem, though. The engineers have actually come up with a solution, and so far it has worked. The solution is the multi-core chip.

What is a multi-core chip? Well, it's quite a simple idea really. Rather than using all the extra transistors to make ever more complicated processors (the trick that has worked for the last 60 years), keep the processors slower and simpler, then pack lots of those processors onto a single chip. Then, as long as the software developers can write their programs so that they use the extra processors, the overall power of the chip keeps on growing.
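The idea can be sketched in code. Here is a minimal illustration (in Python, using its standard `concurrent.futures` module; the prime-counting task and the function names are my own, not from the article) of splitting one job into independent pieces so that each piece can run on its own core:

```python
# A minimal sketch of the multi-core idea: split one job into independent
# pieces and hand each piece to a separate processor core.
# The task (counting primes) and the names are illustrative assumptions.
from concurrent.futures import ProcessPoolExecutor

def count_primes(start, stop):
    """Count the primes in [start, stop) by simple trial division."""
    total = 0
    for n in range(max(start, 2), stop):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            total += 1
    return total

def count_primes_parallel(limit, cores=4):
    """Split [0, limit) into one chunk per core and add up the answers."""
    chunk = limit // cores
    starts = [i * chunk for i in range(cores)]
    stops = starts[1:] + [limit]
    with ProcessPoolExecutor(max_workers=cores) as pool:
        return sum(pool.map(count_primes, starts, stops))

if __name__ == "__main__":
    # The parallel version gives the same answer; it just spreads
    # the work over several cores instead of one.
    assert count_primes_parallel(10_000) == count_primes(0, 10_000)
```

The crucial point is that the chunks are completely independent, so the cores never need to talk to each other until the final sum. Real programs are rarely so obliging, which is where the trouble starts.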

The multi-core hardware is there. By 2007, 8 cores were being packed onto a chip, by 2009, 16, and by 2013 we should have multi-core chips with 64 cores. The prediction from the chip designers is that Moore's Law will continue in this new form: the number of cores on a chip will double every 18 months or so. Problem solved! Well, no, not really. The buck has been passed. Now it is over to the computer scientists. They have to find ways to use all that extra power to actually make their software go faster. They have to find ways to write complex software so that it can use all those extra cores. The only trouble is that writing software to run on multiple processors at once is a problem we have struggled to solve properly for 60 years. Solving that problem, the problem of parallelism, is the real challenge of multi-core.
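A tiny example hints at why parallelism is so hard. In the sketch below (Python's standard `threading` module; the scenario is my own, not from the talk), two threads share one counter. The unsynchronised version can lose updates, because "add one" is really a read, a modify and a write, and another thread can slip in between those steps. The lock makes the update safe, at the cost of making threads wait for each other:

```python
# A minimal sketch of a classic parallelism bug: a data race on shared
# state. Uses Python's standard threading module; names are illustrative.
import threading

counter = 0
lock = threading.Lock()

def add_unsafely(times):
    """Increment the shared counter with no coordination.

    'counter += 1' is a read, then an add, then a write. Another thread
    can interleave between the read and the write, and that thread's
    update is then silently lost.
    """
    global counter
    for _ in range(times):
        counter += 1

def add_safely(times):
    """Increment the shared counter under a lock, one thread at a time."""
    global counter
    for _ in range(times):
        with lock:
            counter += 1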

This article is based on a talk by Fran Allen, the Turing Award winner who spent 45 years working at IBM. The talk was given at Queen Mary, University of London in May 2011.