Artificial intelligence will replicate the human biases we don't acknowledge having

If we struggle to understand the unfairness of the world, the computers we program will share the same fate.
Image: We're replicating ourselves — for good and, often, for bad. (Just_Super / Getty Images / iStockphoto)

The idea of shifting decision-making to algorithms and artificial intelligence is a compelling one: It can, so the theory goes, remove the inconsistencies and prejudices of human decision-making, and make a fairer world.

Life-changing decisions are inevitably affected by our human biases and frailties. One Israeli study, for instance, found that judges granted parole in about two-thirds of cases at the beginning of the day, in close to zero just before the lunch break, and then at higher rates again after lunch, once the judges had eaten and taken a break.

So if hunger alone can derail our decision-making, how do we even begin to tackle deeper-seated biases? Shifting away from a world so affected by human fallibility therefore seems compelling, but it comes with huge dangers that could entrench existing divides and failures.

Silicon Valley, where much of this technology is created, is overwhelmingly white, overwhelmingly rich and overwhelmingly male — and full of people excited about the potential of the technologies they’re creating. What they’re not good at is thinking about the unintended consequences of those technologies in the wild, or testing for real-world complications, inconsistencies or damaging effects. The result has been rollouts of social networks that immediately compromise journalists’ sources, facial recognition that works far less well on non-white and non-male faces, and many more defective bits of tech.

These consequences become far more severe when algorithms are making more significant decisions, as they already do. Algorithms play a role in US bail and sentencing — even trying to predict “future criminals” — and, as ProPublica demonstrated, those predictions display racial bias against black Americans. AI could even bring back racial redlining on loans and mortgages, this time with an air of scientific solidity.

But even AI systems built for seemingly far simpler problems can have broad unintended consequences.

Take Google’s experimental AI assistant “Google Duplex”, unveiled last week, which appeared able to convincingly mimic human speech — including hesitations, “um”s and “ah”s — to make restaurant reservations or hair appointments.

To the company, it was a demonstration of a technology that adds a whole new level of convenience for users, especially millennials who may be averse to making appointments over the phone. To the service industry, it spelled disaster: Customers who book and then fail to show up are one of the banes of an already struggling industry.

Google’s new assistant could make that stubborn problem even harder to solve, even for restaurants that disabled online booking in a deliberate effort to deter no-shows.

Others raised concerns about the deceptive nature of the technology: The AI did not identify itself as such, and appeared to take deliberate steps to sound human. This could have numerous negative effects, from enabling automated harassment with minimal effort, to confusing callers when the AI fails, to damaging businesses.

Inevitably, within days of the demonstration, Google announced that Duplex would identify itself as an AI caller once it rolled out. But it’s telling that the product got so far through development, reaching the public demonstration stage, without anyone having answers to such concerns.

The deeper problem is that we struggle to understand the world as it is: Every event has multiple causes, and they all link with one another. Income and wealth inequality intersect with race, and both intersect with class, educational prospects and dozens of other factors. How do we know how much any given event is influenced by each, and how do we work out how to make fair decisions based on any of this?

These questions are incredibly difficult to answer, but there is a real danger that, if we rush past trying to do so, we will build algorithms and then more complex AIs based on an unequal world we already don’t comprehend. We can already build AIs whose creators cannot always determine why they made the choices they did. In 2016, for instance, Google trained an AI to play the game Go, and it confounded its creators by making a move none of them (or the accompanying grandmasters) could comprehend — and went on to beat the world’s best player, in part thanks to that move. Such moments will become increasingly common.

If creators work off a starting position — the thoroughly flawed world we live in now — that we already do not understand, the potential grows exponentially for systemically biased decisions by AI that become impossible to unravel. Worse yet, unlike human decision-making, in which we can readily see flaws, AIs and algorithms come with an air of unchallengeability. An elaborate system, seemingly designed and extensively tested to be fair, has come up with this ruling, the thinking may go, so it’s unclear who has standing to challenge its results.

Technology giants built social media, which has now reshaped our public conversation and our politics — and left politicians and regulators around the world to work out how to control its power and make it work fairly. Many of the same companies are now rapidly researching AI. Governments will need to act much more quickly this time.

James Ball is an award-winning journalist and author based in London. His journalism has appeared in the Guardian, the Washington Post, BuzzFeed, the Daily Beast and numerous other outlets.