
AI bias: How tech determines if you land a job, get a loan or end up in jail

BY DALVIN BROWN, USA TODAY - 10/02/2019

Businesses across almost every industry deploy artificial intelligence to make jobs simpler for staff and tasks easier for consumers. 

Computer software teaches customer service agents how to be more compassionate, schools use machine learning to scan for weapons and mass shooters on campus, and doctors use AI to map the root cause of diseases.

Sectors such as cybersecurity, online entertainment and retail use the tech in combination with wide swaths of customer data in revolutionary ways to streamline services. 

Though these applications may seem harmless, perhaps even helpful, an AI system is only as good as the data fed into it, and flawed or biased data can have serious implications.

You might not realize it, but AI helps determine whether you qualify for a loan in some cases. There are products in the pipeline that could have police officers stopping you because software identified you as someone else.

Imagine if people on the street could take a photo of you, then a computer scanned a database to tell them everything about you, or if an airport's security camera flagged your face while a bad guy walked clean through TSA.

Those are real-world possibilities when the tech that’s supposed to bolster convenience has human bias baked into the framework.

"Artificial intelligence is a super powerful tool, and like any really powerful tool, it can be used to do a lot of things – some of which are good and some of which can be problematic," said Eric Sydell, executive vice president of innovation at Shaker International, which develops AI-enabled software. 

"In the early stages of any new technology like this, you see a lot of companies trying to figure out how to bring it into their business," Sydell said, "and some are doing it better than others."

Artificial intelligence tends to be a catch-all term to describe tasks performed by a computer that would usually require a human, such as speech recognition and decision making. 

Whether intentional or not, human judgments can spill over into the data and code that AI systems learn from. That means AI can contain implicit racial, gender and ideological biases, which has prompted an array of federal and state regulatory efforts.

Criminal justice 
 
In June, Rep. Don Beyer, D-Va., offered two amendments to a House appropriations bill: one would bar federal funds from being used by law enforcement to acquire facial recognition technology, and the other would require the National Science Foundation to report to Congress on the social impacts of AI.
