Predicting Spurs' results
Did you think Brazil or Argentina were going to win the 2006 World Cup? Or maybe you were a true believer and it was England's time for glory? None of those teams even made the semis. There must be at least 20 million people in the UK who fancy themselves as experts at predicting the results of football matches. The bookies' increased profits suggest that those brave enough to put their money where their mouths are aren't very accurate with their predictions. In fact, even if you base your predictions on some very fancy statistical analysis of previous matches, you will still not be good enough to beat the bookies.
It turns out that a method developed to help avoid software development fiascos can also beat the bookies. This story actually starts back in 1995, when Norman Fenton, a Professor at Queen Mary, and his colleague Martin Neil were grappling with the problem of how to predict the number of bugs left in a software system under development. Getting rid of all the bugs in today's massive computer systems is virtually impossible. Most of the effort of writing software actually goes into trying to get rid of the bugs, but eventually you have to stop. Before releasing new software, you would at least like to be confident that any bugs left are so obscure they will not be a problem for your customers. Even though you do not know exactly what bugs are left, a way of predicting how many there might be would give you confidence that you didn't have a potential multi-million pound fiasco on your hands. The trouble is that traditional statistical ways of making such predictions are not accurate enough for the kinds of systems they have to assess.
Fenton's team were looking for a better way. Eventually they found that a new method, called Bayesian networks, enabled them to combine objective information, like the number of bugs already found, with much more subjective data, like the 'skill of the coders'. The results they got for predicting bugs were so amazing that their paper about it is among the top 1% most cited papers in computer science. The models were built into a commercial tool called AgenaRisk that has since been used successfully by companies like Philips and Motorola.
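To get a feel for how objective and subjective evidence combine, here is a toy Bayesian calculation in Python. It is a minimal sketch, not Agena's actual model: the variables, structure and probability numbers are all invented for illustration. A subjective judgement of the coders' skill is combined, via Bayes' rule, with the observed number of bugs found in testing, to update our belief about how many defects remain.

```python
# Toy, hand-rolled Bayesian network: skill -> residual defects -> bugs found.
# All numbers and variables are illustrative assumptions.

# Subjective prior: how skilled do we judge the coders to be?
p_skill = {"high": 0.7, "low": 0.3}

# P(residual defects | skill): skilled teams leave fewer bugs behind.
p_defects_given_skill = {
    "high": {"few": 0.8, "many": 0.2},
    "low":  {"few": 0.3, "many": 0.7},
}

# P(bugs found in testing | residual defects): buggier systems reveal
# more bugs while being tested.
p_found_given_defects = {
    "few":  {"few_found": 0.9, "many_found": 0.1},
    "many": {"few_found": 0.4, "many_found": 0.6},
}

def posterior_defects(observed_found):
    """P(residual defects | bugs found in testing), by enumeration."""
    joint = {}
    for d in ("few", "many"):
        # Marginalise out the unknown skill, then weight by the evidence.
        joint[d] = sum(
            p_skill[s] * p_defects_given_skill[s][d] for s in ("high", "low")
        ) * p_found_given_defects[d][observed_found]
    total = sum(joint.values())
    return {d: p / total for d, p in joint.items()}

# Testing turned up lots of bugs: belief shifts towards 'many' left behind.
print(posterior_defects("many_found"))  # {'few': ~0.24, 'many': ~0.76}
```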
So how did Spurs help?
Back in 1995 the researchers were not so sure about how effective Bayesian networks could be. It was believed that you got the best results with Bayesian networks when you built your model on the knowledge of a real expert in the subject you want to make predictions about. Being a Spurs fanatic, Norman Fenton thought that was where his real expertise lay. So he set about building a simple Bayesian net model to predict Spurs results, based around the things he thought were key to their success or otherwise: the combination of certain players and positions, the quality of the opposition, and the venue. He built the model in late 1995 and tested it on the subsequent matches. As it depended on particular players like Teddy Sheringham and Darren Anderton, its predictions were only good for two seasons, 1995-96 and 1996-97. One of the distinctive things about Bayesian net models is that they do not make firm predictions. Instead they produce a probability for everything that is uncertain or unknown: they predict the chance of it happening (as do bookies). So, for each game, the model produced probabilities for win, draw and lose. He found that the predictions were not only accurate but provided enough information to beat the bookies hands down, even allowing for their 'mark-up'. This is because the probabilities let you spot situations where the bookies' odds are in your favour.
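Why is a good probability forecast enough to beat the bookies? Here is a minimal sketch of the idea, with invented figures: whenever your model rates an outcome as more likely than the bookie's odds imply, a bet on that outcome has a positive expected profit.

```python
# Spotting a 'value bet': compare the model's probability with the
# bookie's odds. All figures below are invented for illustration.

def expected_profit(model_prob, decimal_odds, stake=1.0):
    """Expected profit of a bet, given your model's probability of the
    outcome and the bookie's decimal odds."""
    win = stake * (decimal_odds - 1)      # profit if the bet comes in
    return model_prob * win - (1 - model_prob) * stake

# Suppose the model says Spurs win with probability 0.50, but the bookie
# prices the win at 5/2 (decimal odds 3.5, implying only about 0.29).
ev = expected_profit(0.50, 3.5)
print(f"Expected profit per £1 staked: £{ev:.2f}")  # positive => worth a bet
```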
The Bayesian net model was 'fixed': its structure was set by the expert, so it could not learn from new results. A different approach is to use programs called 'machine learners' that learn automatically by being fed lots of data and spotting patterns in it. These machine learning models aren't given any expert wisdom but are constantly updated in the light of new results. From the two seasons there were 76 Spurs results to learn from. A series of different machine learners were fed a mass of information about each match (far more information than was used in the expert model). The researchers assumed that the machine learners would eventually outperform the expert model once enough results had been 'learnt'. The team were in for a surprise, though: none of the machine learners came close to outperforming the expert model. It seems that Bayesian network models built by human experts provide better predictions than methods based on pure data analysis alone.
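For contrast, here is a rough sketch of the machine-learning approach, using an off-the-shelf classifier on made-up data. The features, numbers and results below are placeholders, not the researchers' actual dataset.

```python
# A minimal 'machine learner': it sees only match data, with no
# expert-built structure. Data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# One row per match: [home_game, opposition_quality, key_players_fit]
X = rng.random((76, 3))                            # stand-in for 76 results
y = rng.choice(["win", "draw", "lose"], size=76)   # stand-in outcomes

model = LogisticRegression(max_iter=1000).fit(X, y)

# Like the Bayesian net, it outputs win/draw/lose probabilities...
next_match = [[1.0, 0.4, 0.9]]   # home game, weak opposition, players fit
print(dict(zip(model.classes_, model.predict_proba(next_match)[0])))

# ...but it must infer all structure from the data itself, which is why,
# with only 76 results, it struggles to match a model built by an expert.
```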
So, after their success with Spurs, the research team, through their spin-off company Agena, went on to exploit the power of Bayesian networks with great success in applications ranging from air traffic management to predicting operational risk in banks.
So, if you want to predict your own team's results in the coming season (even taking account of possible food poisoning scams!), Bayesian networks could be the answer.