How Explainable Artificial Intelligence Could Lower the Effect of Biased Algorithms


We usually expect computers to be less biased than humans. However, over the past several years, we’ve witnessed multiple controversies over Machine Learning-enabled systems producing biased or discriminatory results. In 2016, for example, ProPublica reported that ML algorithms used by U.S. courts to gauge defendants’ likelihood of recidivism were more likely to label black defendants as high risk than white defendants from similar backgrounds.

The increasing role of Machine Learning (ML) in decision-making systems, from banking to bail, presents an opportunity to build better, less biased systems, but also the risk of reinforcing existing problems, VentureBeat explains.

Some people may call these biases “racist algorithms.” However, the problem isn’t the algorithms themselves; it’s the data fed to them. For example, collecting data from the past is a common starting point for data science projects — but “[historical] data is often biased in ways that we don’t want to transfer to the future,” says Joey Gonzalez, assistant professor in the Department of Electrical Engineering and Computer Science at the University of California at Berkeley and a founding member of UC Berkeley’s RISE Lab.


This is where explainable Artificial Intelligence (AI) could come in. If humans could check the “reasoning” an algorithm used to make decisions about members of high-risk groups, they might be able to correct for bias before it has a serious impact.

A machine learning system is fueled by the data it learns from, which makes it different from a standard computer program in which humans explicitly write every line of code. Humans can measure the accuracy of an ML system, but they have limited visibility into how such a system actually makes its decisions.


According to the VentureBeat report, explainable AI asks ML algorithms to justify their decision-making. For example, in 2016 researchers from the University of Washington built an explanation technique called LIME that they tested on Google’s Inception Network, a popular image classification neural net. Instead of looking at which of the Inception Network’s “neurons” fire when it makes an image classification decision, LIME searches for an explanation in the image itself. It erases various parts of the original image and feeds the resulting “perturbed” images back through Inception, checking to see which perturbations throw the algorithm off the furthest.

By doing so, LIME can attribute the Inception Network’s classification decision to specific features of the original picture. For example, for an image of a tree frog, LIME found that erasing parts of the frog’s face made it much harder for the Inception Network to identify the image, showing that much of the original classification decision was based on the frog’s face.
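
To make the idea concrete, here is a minimal sketch of how an image explanation like this can be produced with the open-source lime package. It is an illustration under stated assumptions, not a reproduction of the University of Washington experiments: a Keras InceptionV3 model stands in for Google’s Inception Network, and the file name tree_frog.jpg is purely hypothetical.

```python
# A minimal sketch of LIME-style image explanation, assuming Keras InceptionV3
# as the classifier and a local "tree_frog.jpg" (both illustrative assumptions).
import numpy as np
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input
from tensorflow.keras.preprocessing import image as keras_image
from lime import lime_image
from skimage.segmentation import mark_boundaries

model = InceptionV3(weights="imagenet")

def classifier_fn(images):
    # LIME passes batches of perturbed images; return class probabilities.
    return model.predict(preprocess_input(np.array(images, dtype="float32")))

# Load and resize the image to the network's expected input size.
img = keras_image.img_to_array(
    keras_image.load_img("tree_frog.jpg", target_size=(299, 299))
)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    img.astype("double"),  # image to explain
    classifier_fn,         # black-box prediction function
    top_labels=1,          # explain only the top predicted class
    hide_color=0,          # "erase" regions by filling them with black
    num_samples=1000,      # number of perturbed images to generate
)

# Highlight the image regions that most supported the top prediction
# (for the tree frog example, this would be the frog's face).
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
highlighted = mark_boundaries(temp / 255.0, mask)
```

The hide_color and num_samples parameters correspond directly to the “erasing” and “perturbed images” the article describes: LIME blanks out segments of the picture, re-queries the classifier, and keeps the segments whose removal hurt the prediction the most.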


While these are promising developments, making AI bias-free ultimately comes down to one thing: data. Bias is likely to occur if the data an algorithm is trained on doesn’t fairly reflect the entire population that developers want to serve.
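
One simple first-pass check for this kind of problem is to compare how each group is represented in the training data against its share of the population the system is meant to serve, and to look at how label base rates differ across groups. The sketch below assumes a pandas DataFrame with hypothetical "group" and "label" columns and made-up reference shares; none of these names or numbers come from the article.

```python
# A minimal sketch of a training-data representation check, with hypothetical
# column names ("group", "label") and assumed population shares.
import pandas as pd

train_df = pd.read_csv("training_data.csv")  # hypothetical file

# Assumed reference shares of each group in the population being served.
population_share = {"group_a": 0.60, "group_b": 0.40}
train_share = train_df["group"].value_counts(normalize=True)

for group, expected in population_share.items():
    observed = train_share.get(group, 0.0)
    print(f"{group}: {observed:.2%} of training data vs {expected:.2%} of population")

# Large gaps in label base rates across groups can be learned and reproduced
# by the model, so they are worth inspecting before training.
print(train_df.groupby("group")["label"].mean())
```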

“It is an incredibly hard problem,” acknowledged Gonzalez. “But by getting very smart people thinking about this problem and trying to codify a better approach or at least state what the approach is, I think that will help make progress.”
