Can AI be racist?

By Davey Gibian, June 12, 2019

AI systems are only as good as the data they analyze.

AI systems are only as good as the data they analyze. If the underlying data is biased, then the choices and decisions made by the AI will also be biased. If the data is biased, say in policing or loan-issuance records, then AIs deployed to automate these processes may magnify underlying human and societal biases rather than eliminating them.

The musician asks the billionaire

Musician and philanthropist Will.I.Am sat with Bill Gates on stage and asked, “What will AI look like in the ghetto?” Bill apparently hadn’t really thought about that. He sat back in his chair, looking a bit puzzled. As Will.I.Am explained his question, he described what would happen to poor, minority, and marginalized communities if AI were used for policing. What would policing look like if police forces used AI to predict crime or allocate policing resources?

Will.I.Am’s question cuts to the heart of AI’s bias problem. An AI’s decision making is only as good as the underlying data.

It’s been well documented that certain police forces in the US target marginalized communities far more than white, well-off ones. If an AI were to examine this data and suggest an optimized resource allocation for the police force, it might well choose to over-police poor, minority, or marginalized communities because that is where more arrests happened previously. The AI would therefore only reinforce the underlying human biases in policing today, instead of making optimal judgements and ethical decisions.

Data biases need to be thoroughly interrogated prior to deploying an AI system. This is especially true when the data itself was not initially built to be used for AI predictions and decision making. Simply slapping an AI layer over a faulty data source will not suddenly make the data better. Instead, the AI will reinforce the underlying biases by learning from the source data.
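
One simple way to start interrogating a dataset is to compare historical outcome rates across demographic groups before any model is trained. The sketch below is a minimal, illustrative check, assuming a pandas DataFrame with hypothetical column names ("race" and "arrested"); it is not a complete fairness audit.

```python
# A minimal sketch of a pre-deployment bias check. Column names
# ("race", "arrested") are hypothetical placeholders for a protected
# attribute and the historical outcome an AI would learn to predict.
import pandas as pd

def outcome_rates_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Return the historical positive-outcome rate for each group."""
    return df.groupby(group_col)[outcome_col].mean()

# Toy example: a large gap between groups is a signal that the
# training data may encode biased human decisions, not ground truth.
data = pd.DataFrame({
    "race": ["A", "A", "A", "B", "B", "B"],
    "arrested": [1, 1, 0, 0, 0, 1],
})
print(outcome_rates_by_group(data, "race", "arrested"))
```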

Racist police robots may be far off, but software that helps decide whether arrestees are sent to jail is already in use, and it is displaying racist tendencies. ProPublica found that software designed to predict the likelihood that an arrestee would re-offend incorrectly flagged black defendants as likely re-offenders at nearly twice the rate of white defendants. Black men are often re-arrested at high rates, but this is in large part because of existing racial profiling and other judicial inequities in the system. So the algorithm was operating as designed, but the underlying data led to algorithmically biased results.
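
For readers who want to see what “flagged at nearly twice the rate” means in practice, the sketch below computes a false positive rate per group from a table of risk-score outputs. The column names are assumptions for illustration, not the actual schema of any real tool.

```python
# A rough sketch of the kind of disparity ProPublica measured: the
# false positive rate (people flagged as high risk who did not
# re-offend), broken out by group. Column names are hypothetical.
import pandas as pd

def false_positive_rate_by_group(df: pd.DataFrame, group_col: str,
                                 flagged_col: str, reoffended_col: str) -> pd.Series:
    """False positive rate per group: flagged high-risk but did not re-offend."""
    did_not_reoffend = df[df[reoffended_col] == 0]
    return did_not_reoffend.groupby(group_col)[flagged_col].mean()

# If one group's rate is roughly double another's, the model's errors
# fall disproportionately on that group even if overall accuracy looks fine.
```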

Algorithmic bias has far-reaching impacts, many of which touch on racial issues. In financial services, algorithms that approve lines of credit may discriminate against minorities because of the neighborhoods they live in. This practice, known as “redlining”, is illegal, but unless an AI is specifically trained not to violate regulatory and legal requirements, the practice could continue and companies could claim ignorance of their AI’s “black box”.
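
One first-pass check a lender could run is sketched below, with assumed column names: compare approval rates across neighborhoods and look at whether a feature like ZIP code acts as a proxy for a protected attribute. This is illustrative only, not legal or compliance guidance.

```python
# A rough sketch of a redlining check, assuming hypothetical columns
# "zip_code", "race", and "approved" in a pandas DataFrame of past
# credit decisions. A first-pass signal, not a compliance tool.
import pandas as pd

def approval_rate_by_zip(df: pd.DataFrame) -> pd.Series:
    """Approval rate per ZIP code; sharp drops in specific areas warrant review."""
    return df.groupby("zip_code")["approved"].mean().sort_values()

def zip_race_composition(df: pd.DataFrame) -> pd.DataFrame:
    """Share of each racial group within each ZIP code, to spot proxy effects."""
    return pd.crosstab(df["zip_code"], df["race"], normalize="index")

# If the lowest-approval ZIP codes are also the ones dominated by a
# protected group, the model may be redlining through a proxy feature.
```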

If you would like to learn more about how to test your AI systems for potential bias, please contact us.
