Is your healthcare algorithm racist?
by Paul Curzon, Queen Mary University of London
Algorithms are taking over decision making, especially in healthcare. But could the algorithms be making biased decisions? Could their decisions be racist? Yes, and such algorithms are already in use.
There is now big money to be made from healthcare software. One of the biggest areas is intelligent algorithms that help healthcare workers make decisions. Some even take over the decision making completely. In the US, for example, software is widely used to predict who will benefit most from interventions. The more you help a patient, the more it costs. Some people may just get better without extra help, but for others intervention can mean the difference between avoiding a disability and living with one, or even between life and death. How do you tell? It matters because money is limited, so someone has to choose. You need to be able to predict outcomes with or without potential treatments. That is exactly the kind of thing machine learning technology is generally good at. By looking at the history of lots and lots of past patients, their treatments and what happened to them, these artificial intelligence programs can spot patterns in the data and then make predictions about new patients.
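To give a feel for what that means in code, here is a minimal sketch in Python, using the scikit-learn library, of training a predictor on past patient records. The features, numbers and outcomes are all invented for illustration; the real commercial systems are far more complex than this.

```python
# A toy sketch of outcome prediction, NOT the actual commercial system.
# All patients, features and outcomes below are made up.
from sklearn.linear_model import LogisticRegression

# Each row is a past patient: [age, number of conditions, visits last year]
past_patients = [
    [45, 1, 2],
    [70, 4, 9],
    [60, 2, 3],
    [80, 5, 12],
]
# 1 = benefited from the extra-help programme, 0 = did not
benefited = [0, 1, 0, 1]

# Learn the pattern linking patient history to outcome
model = LogisticRegression()
model.fit(past_patients, benefited)

# Predict for a new patient the model has never seen
new_patient = [[65, 3, 6]]
print(model.predict_proba(new_patient)[0][1])  # estimated chance of benefiting
```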
This is what current commercial software does. Ziad Obermeyer, from UC Berkeley, decided to investigate how well these systems make those decisions. Working with a team of academics and clinicians, he looked specifically at the differences between black and white patients in one widely used system, which made decisions about whether to put patients on more expensive treatment programmes. The team found that the system had a big racial bias in the decisions it made: for patients who were equally ill, it was much more likely to recommend white patients for treatment programmes.
One of the problems with machine learning approaches is that it is hard to see why they make the decisions they do. They just look for patterns in data, and who knows what patterns they will base their decisions on? The team had access to the data of a vast number of patients the algorithm had made recommendations about, the decisions made about them, and the outcomes. This meant they could evaluate whether patients were being treated fairly.
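As a rough idea of the kind of check such data makes possible, here is a toy audit in Python. The records are invented; the point is the comparison it makes: among patients the algorithm recommended, how ill, on average, was each group? If one group had to be much sicker to get the same recommendation, something is wrong.

```python
# Invented records of patients the algorithm recommended for the programme.
recommended = [
    {"race": "white", "conditions": 3},
    {"race": "white", "conditions": 2},
    {"race": "black", "conditions": 5},
    {"race": "black", "conditions": 6},
]

# Compare how ill each group had to be, on average, to get recommended
for group in ("white", "black"):
    sick = [p["conditions"] for p in recommended if p["race"] == group]
    print(group, "average conditions when recommended:", sum(sick) / len(sick))
```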
The data given to the algorithm specifically excluded race, supposedly to stop it making decisions based on the colour of a person's skin. However, despite not having that information, that was ultimately what it was doing. How?
The team found that the algorithm's decision making was based on predicting healthcare costs rather than on how ill people actually were. The greater the cost saving of putting a person on a treatment programme, the more likely it was to recommend them. At first sight this seems reasonable, given the aim is to make the best use of a limited budget, and the system was scrupulously fair in allocating treatment based on cost. However, when the team looked at how ill people were, black people had to be much sicker than white people before they would be recommended for help. There are lots of reasons why more money might be spent on white people, skewing the system. For example, they may be more likely to seek treatment earlier or more often. Being poor can make it harder to seek healthcare: it may be difficult to get to hospital, or to take time off work. If more of the black people in the data used to train the system are poor, they will have sought help less often, so less will have been spent on them. The system had spotted patterns like this, and that was how it was making its decisions. Even though it was never told who was black and who was white, it had learnt to be biased.
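A toy example makes the mechanism clear. Suppose two patients are equally ill, but, for the reasons above, less has historically been spent on the black patient. A rule that selects by cost then prefers the white patient, even though race never appears anywhere in the decision. All the numbers below are invented.

```python
# Invented numbers, purely for illustration. Both patients have the same
# number of conditions (equally ill), but past spending differs because
# of unequal access to care.
patients = [
    {"name": "patient A (white)", "conditions": 4, "past_cost": 12000},
    {"name": "patient B (black)", "conditions": 4, "past_cost": 7000},
]

# A cost-based rule: recommend whoever represents the bigger spend.
# Race is never consulted, yet the outcome differs by race.
selected = max(patients, key=lambda p: p["past_cost"])
print("Cost-based rule recommends:", selected["name"])
```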
There is an easy way to fix the system. Instead of giving it data about costs and having it use that as the basis of its decisions, you can use direct measures of how ill a person is: for example, the number of different conditions a patient is suffering from, together with the rule of thumb that the more complications you have, the more you will benefit from treatment. The researchers showed that when the system was trained this way instead, the racial bias disappeared. Access to healthcare became much fairer.
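Continuing the toy example, the fix is simply a change of target: score patients by a direct measure of illness, such as their number of active conditions, instead of by cost.

```python
# Same invented patients as before: equally ill, different past spending.
patients = [
    {"name": "patient A (white)", "conditions": 4, "past_cost": 12000},
    {"name": "patient B (black)", "conditions": 4, "past_cost": 7000},
]

# Score by how ill someone is, not by how much was spent on them.
for p in patients:
    print(p["name"], "score:", p["conditions"])  # both score 4: the gap disappears
```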
If we are going to allow machines to take healthcare decisions for us based on their predictions, we have to make sure we know how they make those predictions, and that those predictions are fair. You should not lose the chance of getting the help you need just because of your ethnicity, or because you are poor. We must take care not to build racist algorithms. Just because computers aren't human doesn't mean they can't be humane.