Wednesday, March 30, 2022

by Tom Snee

Machine learning algorithms play an ever-larger role in people's lives, from approving mortgages and credit cards to deciding who gets a job interview to choosing which advertisements appear in our social media feeds.

But analysts have discovered those algorithms can be unfair, discriminating against people based on race, gender, health conditions, and a range of other factors. A team of University of Iowa researchers has received an $800,000 grant as part of an initiative with the National Science Foundation and Amazon to make machine learning algorithms less discriminatory.

“Machine learning is used to make many high-stakes decisions, but it often discriminates against people who have protected characteristics,” said Qihang Lin, associate professor of business analytics in the Tippie College of Business and the co-lead principal investigator on the grant, with Tianbao Yang, principal investigator and associate professor of computer science in the College of Liberal Arts and Sciences. “We want to help make sure those decisions won’t be discriminatory.”

Machine learning is the process of training an algorithm on enormous amounts of data so it learns how to perform a task. As more data is added, the algorithm refines its behavior, “learning” as it goes, much as a person adjusts after encountering new information.
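That learning process can be made concrete with a small sketch. The example below is not part of the researchers' work; it is a hypothetical illustration that trains a standard classifier (scikit-learn's LogisticRegression) on made-up loan data, then retrains it as more data arrives and shows how its prediction for the same applicant shifts.

```python
# Illustrative only: a toy model "learning" from hypothetical loan data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features: [income in $1,000s, existing debt in $1,000s].
X_initial = rng.normal(loc=[60, 20], scale=[15, 8], size=(200, 2))
y_initial = (X_initial[:, 0] - X_initial[:, 1] + rng.normal(0, 10, 200) > 35).astype(int)

model = LogisticRegression().fit(X_initial, y_initial)
applicant = np.array([[55, 25]])
print("approval probability (200 examples):", model.predict_proba(applicant)[0, 1])

# As more data arrives, refitting updates what the model has "learned."
X_more = rng.normal(loc=[60, 20], scale=[15, 8], size=(2000, 2))
y_more = (X_more[:, 0] - X_more[:, 1] + rng.normal(0, 10, 2000) > 35).astype(int)

model = LogisticRegression().fit(np.vstack([X_initial, X_more]),
                                 np.concatenate([y_initial, y_more]))
print("approval probability (2,200 examples):", model.predict_proba(applicant)[0, 1])
```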

However, Lin said, algorithms can learn discriminatory patterns from the data itself. For instance, an algorithm might conclude that Black people are less susceptible to a certain illness than white people because the data show that fewer Black people are tested for the illness or hospitalized with it. What the algorithm wasn't told is that fewer Black people have access to health care, so even when they have the illness, they are less likely to see a doctor.

“The model thinks fewer Black people than white people have a disease because they are healthier, when it’s actually a sign of reduced access to health care resources,” Lin said.
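Lin's point can be sketched with a short, entirely hypothetical simulation. In the code below, both groups have the same true illness rate, but one group is tested far less often; a model trained on the recorded diagnoses then underestimates that group's risk, mistaking unequal access to care for better health.

```python
# Hypothetical simulation of how skewed data can mislead a model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

# Two groups with identical true illness rates (10%).
group = rng.integers(0, 2, size=n)
truly_ill = rng.random(n) < 0.10

# Unequal access to care: group 1 is tested far less often than group 0.
tested = rng.random(n) < np.where(group == 1, 0.30, 0.80)

# The recorded label only captures illness that was actually diagnosed.
recorded_ill = (truly_ill & tested).astype(int)

model = LogisticRegression().fit(group.reshape(-1, 1), recorded_ill)
p = model.predict_proba([[0], [1]])[:, 1]
print(f"predicted illness rate, group 0: {p[0]:.3f}")  # roughly 0.10 * 0.80 = 0.08
print(f"predicted illness rate, group 1: {p[1]:.3f}")  # roughly 0.10 * 0.30 = 0.03, despite equal true rates
```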

Yang and Lin will use the three-year grant to build on research already underway that defines fairness and examines different risk measures, helping managers strike a balance between fairness and risk when making health care decisions.

Mingxuan Sun of Louisiana State University is the project’s third co-principal investigator.

MEDIA CONTACT: Tom Snee, 319-384-0010 (o); 319-541-8434 (c); tom-snee@uiowa.edu