How Who's fat just walks away

In 'Partners in Crime', an episode of the hit BBC series Doctor Who, the Doctor and Donna Noble finally meet up (again) in a strange adventure involving fat, a space nanny and lots and lots and lots of cute aliens. In the show the Doctor and Donna are both investigating Adipose Industries, a company that has created a new diet pill advertised as making 'the fat just walk away'. In reality the pill works by causing a person's fat cells to convert into tiny living alien creatures called the Adipose. These creatures bud out of the body, normally while the person is asleep, and go off to join their other Adipose brothers and sisters under the care of alien nanny Miss Foster, who is posing as the head of the company. In one of the most memorable scenes the streets of London are filled with thousands of tiny Adipose alien children as they rush to be taken up to the newly arrived mother ship. On screen each Adipose looks like it has its own unique personality. They all move in slightly different ways and get up to various antics... but how did the special effects technicians make this massive crowd scene possible?

Who How?

The answer is that they used a special effects animation package called Massive, which stands for Multiple Agent Simulation System in Virtual Environment. This software has been responsible for some memorable big screen movie moments. It was first developed in New Zealand to create the impressive battle scenes in The Lord of the Rings, but it was soon applied to rooms full of robots in the film I, Robot, and even to dancing penguins in Happy Feet.

Doctor Who was the first time Massive had been used on television. The Adipose, as well as looking cute, were also intelligent characters. Rather than animating the thousands of individual characters by hand, the software used artificial intelligence techniques from computer science that allow each character to interact with those around it in its own individual way. Rather than just reacting to the movements of nearby characters using a fixed set of behaviours, as earlier 'particle animation' packages did, Massive characters have 'brains'.
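To get a feel for the difference, here's a tiny sketch in Python (nothing like Massive's real code, and the rules are made up) where each character in a crowd looks at who is nearby and decides its own behaviour, rather than everyone blindly following one fixed rule.

# A toy illustration of agents with their own simple 'brains':
# each agent checks who is nearby and picks its own behaviour.

import math
import random

class Agent:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.action = "walk"

    def neighbours(self, crowd, radius=2.0):
        return [a for a in crowd
                if a is not self
                and math.hypot(a.x - self.x, a.y - self.y) < radius]

    def think(self, crowd):
        near = self.neighbours(crowd)
        # A couple of simple 'brain' rules; real Massive brains have thousands.
        if len(near) > 3:
            self.action = "slow down"      # it's getting crowded ahead
        elif random.random() < 0.05:
            self.action = "wave"           # an occasional individual antic
        else:
            self.action = "walk"

crowd = [Agent(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(20)]
for agent in crowd:
    agent.think(crowd)
print([a.action for a in crowd])

Because every agent runs its own rules on what it can sense, no two runs of the crowd look quite the same, which is exactly the effect a fixed particle system struggles to give.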

See hear

Animated characters created by Massive have simulated senses of sight and hearing, can 'feel' the environment around them, and can also be given specific roles to play. Their brain is a network of seven or eight thousand logic rules that define what a character does and how they do it. The characters can 'see' what is around them: each uses a simple image created from its own point of view in the environment, a bit like having a TV camera fixed to its head. Each character can see what you would see if you were standing in the same place, and if one character can't see another, it won't react to it.
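Here's a rough Python sketch of that per-character vision idea (an illustration only; Massive actually renders a little image from each character's viewpoint): a character only notices things that fall inside its own cone of vision.

# A minimal sketch of per-character 'vision': a character only reacts
# to things inside its own field of view and within range.

import math

def can_see(viewer_pos, viewer_facing, target_pos, fov_degrees=120, max_range=15.0):
    """Return True if the target is within the viewer's cone of vision."""
    dx = target_pos[0] - viewer_pos[0]
    dy = target_pos[1] - viewer_pos[1]
    distance = math.hypot(dx, dy)
    if distance == 0 or distance > max_range:
        return False
    # Angle between where the viewer is facing and the direction to the target
    angle_to_target = math.degrees(math.atan2(dy, dx))
    difference = (angle_to_target - viewer_facing + 180) % 360 - 180
    return abs(difference) <= fov_degrees / 2

print(can_see((0, 0), 0, (5, 1)))    # roughly straight ahead -> True
print(can_see((0, 0), 0, (-5, 1)))   # behind the viewer      -> False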

The software also allows the characters to hear. For example, in the Lord of the Rings battles Orcs emitted a particular sound (a frequency) and Elves emitted a different frequency, so the Orcs could hear nearby Orcs, the Elves could hear nearby Elves, and each could act accordingly. The 'sounds' faded away the further a listener was from the character making them, and they also changed depending on what the character was doing, just like in real life.
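A possible sketch of the hearing idea (the fade-out rule and the numbers here are assumptions, not Massive's actual sound model): each character emits a sound on its own 'frequency', and a listener only hears the frequencies it cares about, more quietly the further away they are.

# A rough sketch of simulated hearing with distance fall-off.

import math

def loudness(listener_pos, emitter_pos, emitted_volume=1.0):
    """Sound fades away the further the listener is from the emitter."""
    distance = math.hypot(emitter_pos[0] - listener_pos[0],
                          emitter_pos[1] - listener_pos[1])
    return emitted_volume / (1.0 + distance)

def heard_sounds(listener_pos, listener_frequency, emitters):
    """emitters: list of (position, frequency, volume). Only sounds on the
    listener's own frequency are heard, e.g. Orcs listening for other Orcs."""
    return [loudness(listener_pos, pos, vol)
            for pos, freq, vol in emitters
            if freq == listener_frequency]

emitters = [((1, 0), "orc", 1.0), ((10, 0), "orc", 1.0), ((2, 0), "elf", 1.0)]
print(heard_sounds((0, 0), "orc", emitters))  # hears the two Orcs, nearer one louder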

A crude sense of touch was also included. Characters knew when they came into contact with things in the environment and adjusted what they were doing accordingly. For example, a character would change posture to climb up a slope, or jump over an obstacle.
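Something like this little sketch (with made-up ground heights and thresholds) captures the idea: the character checks the ground just ahead and picks a different movement if it finds a slope or an obstacle.

# A tiny sketch of a crude sense of 'touch': check the ground ahead
# and change what the character is doing in response.

def choose_movement(height_here, height_ahead):
    step = height_ahead - height_here
    if step > 0.5:
        return "jump over obstacle"
    elif step > 0.1:
        return "climb slope"
    elif step < -0.1:
        return "walk downhill"
    return "walk"

print(choose_movement(0.0, 0.0))   # walk
print(choose_movement(0.0, 0.3))   # climb slope
print(choose_movement(0.0, 1.0))   # jump over obstacle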

What next?

When the characters sense the world around them they need to be able to react properly. Massive uses master characters to do this: like lead actors in a scene, these characters cue the other characters about what to do. A master character is programmed to carry out around two hundred main actions. These main actions can be created through motion capture, where real actors are recorded playing out the actions a director might expect in the scene (running, falling, jumping and so on), and a computer vision system turns these human movements into animation blueprints that the characters can recreate.
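Here's a toy version of the cueing idea in Python (the action names and the cue-by-distance rule are just illustrative assumptions): a master character picks one of its main actions, and the characters close enough to it take their cue from what it is doing.

# A sketch of 'master characters' cueing the crowd around them.

import math
import random

MAIN_ACTIONS = ["run", "fall", "jump", "cheer"]   # a real master has around 200

def cue_crowd(master_pos, master_action, crowd_positions, cue_radius=5.0):
    """Characters close enough to a master copy its action; others carry on walking."""
    actions = []
    for pos in crowd_positions:
        distance = math.hypot(pos[0] - master_pos[0], pos[1] - master_pos[1])
        actions.append(master_action if distance < cue_radius else "walk")
    return actions

master_action = random.choice(MAIN_ACTIONS)
print(master_action)
print(cue_crowd((0, 0), master_action, [(1, 1), (3, 0), (20, 20)]))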

Along with the master characters' main actions, each character has a set of around 300 smaller individual actions to choose from, like scratching their nose or tripping over their feet. At any moment, different characters will be doing different actions depending on what they can sense in the environment and what the master characters nearby are doing. A clever bit of software called a motion blending engine takes the individual actions, like walking up a slope, reacting to a shout from the left and waving, and combines them all to give the correct overall motion.
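In spirit the blending works something like this simplified sketch (the real engine blends full skeleton animations, not just movement directions): each action suggests a movement with a certain strength, and the blend is their weighted combination.

# A very simplified sketch of a motion blending engine.

def blend(actions):
    """actions: list of (weight, (forward, sideways)) movement suggestions."""
    total_weight = sum(w for w, _ in actions)
    forward = sum(w * move[0] for w, move in actions) / total_weight
    sideways = sum(w * move[1] for w, move in actions) / total_weight
    return forward, sideways

walking_up_slope = (1.0, (1.0, 0.0))   # mostly forwards
turning_to_shout = (0.5, (0.0, 0.4))   # a nudge to the left, towards the shout
waving = (0.2, (0.0, 0.0))             # waving on the spot, no movement

print(blend([walking_up_slope, turning_to_shout, waving]))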

Back in

Using high speed computers the special effects technicians can quickly create examples of the group scenes. The director can then fine-tune the action by hand, using a series of sliders in the characters' 'brains' to change the action in case the intelligent characters haven't got it just right themselves. This is possible because the characters' brains work using a method called fuzzy logic. Normal logic says that something is either true or false. It can't be anything else, so a character would have to be either running or not running. In fuzzy logic the control is more like a slider that can go from true to false through stages in between, so characters can be 'sort of running', which lies somewhere between running and not running. This fuzzy 'brain' gives the characters far more variety, and the director much greater control over the digital cast.
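A tiny sketch of the fuzzy idea (just an illustration, not how Massive's brains are actually coded): instead of running being simply true or false, a character has a degree of running between 0 and 1, which behaves like one of those director's sliders.

# Fuzzy logic in miniature: a degree of running rather than true/false.

def fuzzy_speed(degree_of_running, walk_speed=1.5, run_speed=6.0):
    """degree_of_running: 0.0 = not running at all, 1.0 = flat-out running."""
    degree = max(0.0, min(1.0, degree_of_running))
    return walk_speed + degree * (run_speed - walk_speed)

print(fuzzy_speed(0.0))   # walking
print(fuzzy_speed(1.0))   # running
print(fuzzy_speed(0.4))   # 'sort of running', somewhere in between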

Finally the Massive characters can be put into a scene with human actors, a process called digital compositing. The characters need to be at the right scale to slot into the previously filmed footage, both in the foreground and in the background further away. Since the filmed footage and the animated characters are both digital, it's a simple matter of replacing the appropriate live footage pixels with the character pixels, and suddenly our characters are there in the live action.
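At its simplest, the idea looks something like this sketch (real compositing works per colour channel with soft, anti-aliased edges, but the principle is the same): wherever the character layer has a pixel, it replaces the live-action pixel underneath.

# A bare-bones sketch of digital compositing by pixel replacement.

def composite(live_frame, character_frame, transparent=None):
    """Both frames are same-sized 2D lists of pixel values; transparent
    character pixels let the live footage show through."""
    result = []
    for live_row, char_row in zip(live_frame, character_frame):
        result.append([char if char is not transparent else live
                       for live, char in zip(live_row, char_row)])
    return result

live = [["street", "street"], ["street", "street"]]
adipose = [[None, "adipose"], [None, None]]
print(composite(live, adipose))
# [['street', 'adipose'], ['street', 'street']]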

It's possible then to finish off by adding things like smoke in the atmosphere, or ensuring that shadows from the live action are also cast on the animated characters.

So that's how they do it. The Adipose convincingly walk, run, skip and jump their way home through the streets of London thanks to some clever special effects software, a fuzzy brain, and the brainy brain of episode writer Russell T. Davies.