How to Rid Artificial Intelligence of Human Prejudice

13 November 2019

In 2015, Google had to issue an apology when its photo app mistook a black user for a gorilla, revealing that algorithms are not free of bias, despite their apparent impartiality. The incident drew attention to a problem that had been building for years: as early as 2009, Asian users reported that a digital camera kept asking whether someone had blinked even though their eyes were open in the photograph. The camera was sold by Nikon, a Japanese firm.

It’s not surprising that artificial intelligence (AI) programs have inherited society’s biases, conscious or not, given that they are trained on human decisions and on datasets collected by people. The problem is that algorithms have a sheen of neutrality which can obscure insidious social prejudice. In 2018, for instance, Amazon had to scrap an AI recruiting tool trained on job applications which the company had received over a ten-year period—their program, true to the tech industry’s track record, was showing bias against women.

Many facial recognition programs have a racial bias. Credit: NIST

Gemma Galdon, founder and director of the consulting firm Eticas, recalls a case where one person changed from a male to a female name on InfoJobs, an online job bank. The career site’s algorithm started listing traditionally female jobs with less pay, despite there being no change to the candidate’s qualifications. “All companies are required to follow an equal-opportunity strategy: if they only employ men, they can be sued,” Galdon says: “Why isn’t InfoJobs also forced to check for bias in its algorithm?”

Artificial intelligence already makes decisions (or aids decision-making processes) in finance, in the judicial system, in healthcare and in national security, and in doing so it can deal out unfair treatment to social minorities. Eradicating the prejudice baked into these programs has become a matter of urgency. For some digital services, the solution appears straightforward: if a facial recognition algorithm struggles to recognise black and Asian faces, for instance, the training dataset probably needs greater diversity.

However, problems which run deeper have no clear-cut answers. This year, a study published in the journal Science revealed that an important triage algorithm used in the United States underestimated the medical needs of black patients. Surprisingly, it did this with no prior knowledge of their ethnicity, because it was programmed to assign patients risk scores based on their healthcare costs. As it happens, the United States spends less money, on average, on the healthcare of black patients.

Why are robots prejudiced?

In this case, the developers chose healthcare cost because they thought it was a suitable, unbiased proxy for actual healthcare need, but they got it wrong. In other words, they had trained the algorithm to correctly solve a poorly defined problem. “There are several reasons why an algorithm may be biased or unfair. First, because it reflects society, reproducing the unfair dynamics which already exist. It could also be that the training data are skewed or that the problem has been incorrectly defined. Or it could be that the engineers add in their own prejudice. We have realised that every algorithm with social impact is biased,” says Galdon.
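To make the problem concrete, here is a minimal, purely illustrative Python sketch (synthetic data, not the actual model from the Science study): two groups have identical medical needs, but less money is spent on one of them, so a risk score derived from cost alone ends up referring fewer of its members for extra care, even though the model never sees the group label.

```python
import numpy as np

# Purely synthetic illustration of a poorly defined problem: cost used as a proxy for need.
rng = np.random.default_rng(0)
n = 10_000

# "True need" (e.g. number of chronic conditions), drawn identically for both groups.
need = rng.poisson(lam=2.0, size=n)
group = rng.integers(0, 2, size=n)  # 0 = group A, 1 = group B; never shown to the model

# Observed spending: proportional to need, but roughly 30% less is spent on group B.
spending = need * 1_000 * np.where(group == 1, 0.7, 1.0) + rng.normal(0, 200, size=n)

# Stand-in for a model trained with cost as its target: the risk score is just scaled spending.
risk_score = spending / spending.max()

# Refer the top 20% of risk scores to a care-management programme.
referred = risk_score >= np.quantile(risk_score, 0.8)

for g, label in [(0, "group A"), (1, "group B")]:
    mask = group == g
    print(f"{label}: mean need = {need[mask].mean():.2f}, "
          f"referral rate = {referred[mask].mean():.1%}")
```

Despite identical average needs, the group on which less is spent falls below the referral threshold far more often, mirroring the dynamic the study describes.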

Several recruitment and job-search algorithms have been shown to exhibit sexist bias. Source: MaxPixel

The first challenge is to pinpoint unfair decisions, in order to stop them and to prevent them from being repeated in the future. Nicolas Kayser-Bril, a reporter for the non-profit AlgorithmWatch, is an expert at identifying biased tech. “You can spot them the old-fashioned way, by eye, but it’s hard to prove bias,” he says. Part of the problem is that AI code is usually inscrutable, like a black box: even its creators aren’t privy to the decision-making process.

“When I know the type of algorithm being used, I know which questions to ask,” says Kayser-Bril. “In the case of machine learning, I want to know which training dataset was used.” At this point, his efforts are usually foiled by the lack of transparency from the companies and institutions that develop these systems. “There is AI designed to detect bias in other AIs, but as long as the algorithm is proprietary, we, the journalists, can’t analyse it,” he explains. For Kayser-Bril, the situation is akin to a health inspection at a restaurant. “You want to find out if the restaurant is being run hygienically, so you can look at the dishes that are being served, but what you really want to do is go into the kitchen to inspect how the dishes are made,” he says.
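As a rough illustration of what inspecting “the dishes that are being served” can look like in practice, the sketch below compares a hypothetical screening tool’s positive-decision rates across groups from the outside, with no access to its code. The data, the group labels and the 80% threshold (the “four-fifths rule” used in US employment guidance) are assumptions for the example, not AlgorithmWatch’s actual method.

```python
from collections import Counter

# Hypothetical observed outcomes of a proprietary screening tool: (group, shortlisted)
decisions = [
    ("men", True), ("men", True), ("men", False), ("men", True), ("men", True),
    ("women", False), ("women", True), ("women", False), ("women", False), ("women", False),
]

totals, positives = Counter(), Counter()
for group, shortlisted in decisions:
    totals[group] += 1
    positives[group] += shortlisted  # True counts as 1

rates = {g: positives[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

print(rates)                                    # e.g. {'men': 0.8, 'women': 0.2}
print(f"disparate impact ratio = {ratio:.2f}")  # ratios well below 0.8 are a common red flag
```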

Nipping bias in the bud

Galdon’s company does visit clients’ “kitchens” to help them fix and prevent bias in their algorithms. “We audit the machine learning, but that only accounts for a small share of the problems we find, perhaps 20%,” she says. The remaining 80% is more “substantial” work, Galdon claims, which involves analysing the entire decision chain, from the definition of the problem and the technologists’ assumptions to the specific training provided for end-users, the people who will apply the algorithm in a job with social consequences.

Technologists can unwittingly perpetuate society’s stereotypes and prejudices. Source: MaxPixel

According to the experts, teams developing artificial intelligence must strive for greater social and professional diversity; this will be key to building a fairer future for the field. In the United States, only 2.5% of Google’s workforce is black. Furthermore, technologists rarely work with social scientists to ensure their assumptions are sound before they launch a project. But Galdon says times are changing: “Engineers run into problems that they know they have no training for, and our intervention is always welcome,” she says, referring to her company’s consulting services.

Beyond computer programs which result in outright discrimination, there is an overarching inequity problem in the tech industry. In short, algorithms are only developed for those who can afford them, setting up unequal power dynamics from the get-go. Taryn Southern, director of the neurotechnology film I Am Human, told the online portal Big Think that brain-machine interfaces designed to make us “smarter, better, faster” reflect the “Western bias to favor productivity and efficiency”. Why assume everyone shares those values? “Perhaps other, Eastern cultures would orient the use of an interface towards inducing greater states of calm or creating more empathy,” Southern suggests.

Bruno Martín

@TurbanMinor
