Why We Need to Lie to AI

David Dylan Thomas
3 min read · Aug 22, 2020


[Image: Magic 8-ball toy showing the answer “Don’t count on it.”]

Amazon had what they thought was a great idea. Create a bot to help figure out whom to hire. There was only one problem. It kept recommending men. In fact, if it saw the name of a women’s college on an application, it demoted it.

How did an AI, this bastion of impartiality, become so sexist?

Because it was told to.

When Amazon investigated, one of the first questions they asked, quite logically, was how the AI had been trained. See, AI doesn’t just know, out of the box, how to make recommendations. It is not, in fact, intelligent. You have to give it data. Based on that data, and the corresponding outcomes, it tries to predict future outcomes. Give it a bunch of historical data correlating certain factors to recidivism, and it’ll make parole recommendations based on similar data for criminals up for parole. (We’ll get back to that one.)

So for Amazon’s hiring bot, the whole idea was to train it on the previous ten years’ worth of resumes and have it make recommendations accordingly. Guess what most of those resumes had in common? They came from dudes.

So, the bot took one look at those resumes, said, “Gee, you sure must like dudes!” and started recommending just dudes. Easy peasy.
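To make the mechanism concrete, here’s a minimal sketch in Python with scikit-learn. The features, data, and labels are invented for illustration; this is not Amazon’s actual system. The point is just that a model trained on biased historical outcomes reproduces the bias, because that’s what the data tells it to do.

```python
# A toy illustration, not Amazon's system: invented features and labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Invented features: is the candidate qualified, and does the resume
# mention a women's college?
qualified = rng.integers(0, 2, n)
womens_college = rng.integers(0, 2, n)

# Invented "historical" outcomes: past hiring favored resumes that do NOT
# mention a women's college, regardless of qualifications.
hired = ((qualified == 1) & (womens_college == 0)).astype(int)

X = np.column_stack([qualified, womens_college])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# The learned weight on the women's-college feature comes out negative:
# the model "demotes" those resumes because the historical data did.
print(dict(zip(["qualified", "womens_college"], model.coef_[0].round(2))))
```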

This is why we need to start lying to AI.

AI does what you tell it to. We have a myth that the best way to train an AI is to point it at the world we’ve got and then have it make predictions about the future based on the world we’ve got. The problem with that theory is that the world we’ve got is very racist and sexist (not to mention ableist, nationalist, classist, etc.). So its predictions will be very ableist, sexist, etc.

Let’s go back to that parole recommendation algorithm. The company Northpointe designed the COMPAS algorithm to make predictions about the likelihood of criminal recidivism. These predictions were meant to help in decisions ranging from sentencing to bail recommendations to parole. Basically anything where being able to answer the question “If I put this person out in society, what are the odds that they’ll start some shit?” would be helpful. Which, in the criminal justice system, is a lot of decisions. Predictions were supposedly based on data that indicated whether a criminal was more or less likely to commit a crime if released. The problem was that the “data” in question was based on surveys that were inherently racist. They reflected a racist world. As a result, white criminals with eerily similar records were considered a lower risk than their black counterparts.

There is no end of examples. Image-recognition software associates women with kitchens more often because the body of photos it was trained on largely suggested that correlation. Point a bot at a sexist dataset and it will make sexist correlations.

So here’s the thing. If you want AI to help you understand how racist and sexist the world (or your company) actually is by pointing it at that world (or company) and seeing what kind of predictions it makes, have at it. That’s actually a very clever use of AI.

If, however, you want AI to make predictions that lead to a more equitable world, you need to lie to it.

You need to tell it that black people are more likely to get housing loans than white people.

You need to tell it that women are more likely to make higher salaries than men.

You need to only show it photos of women as scientists and CEOs.

You need, before you show it anything, to decide on the world you want.
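Here’s what that “lie” could look like in practice, as a sketch using the same invented toy setup as before: instead of training on the historical record, you construct a training set that reflects the world you want, where the outcome depends only on qualifications, and the model learns to ignore the group entirely.

```python
# Same toy setup as above, but the training labels describe the world we
# want rather than the world we had.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
qualified = rng.integers(0, 2, n)
group = rng.integers(0, 2, n)  # e.g. 1 = resume mentions a women's college

# The "lie": in our training data, hiring depends only on qualifications.
hired = qualified.copy()

X = np.column_stack([qualified, group])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# The weight on `group` comes out near zero: the model doesn't penalize
# the group, because the data we fed it never did.
print(dict(zip(["qualified", "group"], model.coef_[0].round(2))))
```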

David Dylan Thomas is the author of Design for Cognitive Bias. You can learn more about having him blather on about design, bias, and social justice at your organization or next event at daviddylanthomas.com.


