
UW Professor Anirban Basu Addresses ‘Algorithmic Discrimination’ With New Study

Seattle, WA – University of Washington Professor Anirban Basu, the Stergachis Family Endowed Director of the CHOICE Institute, has published a paper in Science Advances on the controversial question of whether variables such as race and ethnicity belong in clinical prediction algorithms.

Basu said the paper was motivated by concerns raised by several federal and state committees about “algorithmic discrimination,” an unintentional outcome of how big data technologies and structures are used that can encode discrimination in automated or human decisions.

“This work tries to answer to what extent the inclusion or exclusion of race or ethnicity variables as predictors or features in developing clinical algorithms can induce algorithmic discrimination,” he said.

These questions, and the framework the paper presents for thinking about them, readily extend to all types of machine learning and AI algorithms, even outside health care.

“Race is a social and not a biological construct,” said Basu. “This distinction matters because certain arguments against using race in developing clinical prediction models invoke this notion of race not being a biological feature.”

Basu noted that researchers often take a utilitarian approach that aims to maximize a (positive) outcome in the population, irrespective of who generates it, which suggests allocating resources to those with the greatest opportunities to generate outcomes or utilities.

“Such an approach would suggest that race/ethnicity variables should always be included in algorithms,” Basu added. “However, incorporating any notion of fairness in such predictions would likely result in a different prescription.”

For this study, Basu applied the normative Equality of Opportunity (E.O.) framework, widely used in fairness contexts across the social sciences and law, and notably applied by the U.S. Supreme Court in several landmark rulings.

“This is the first time anyone has applied this framework to hold machine learning and other artificial intelligence algorithms to the same standard of equity and, in the process, to answer whether race should be included in these algorithms under ideal and real data conditions,” he said.

Basu’s work follows two main ex ante E.O. principles:

1) Inequality of outcomes is unethical if it arises from differences in immutable circumstances. Such inequality can be remedied by compensating individuals with disadvantaged circumstances, giving them the same opportunity to generate outcomes.

2) Inequality in outcomes arising from differential effort across individuals within each level of circumstances is not a moral bad (i.e., it is acceptable). Two individuals with the same circumstance should be rewarded differentially to preserve the differences in their expected outcomes.

The work applies these principles to evaluate algorithms, categorizing them as 1) diagnostic algorithms, where the outcome is already realized at the time point when decision-making happens using the predictions, and 2) prognostic algorithms, which predict outcomes that lie in the future relative to the time point of decision-making. A toy sketch contrasting the two principles follows.
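To make the two principles concrete, here is a minimal, hypothetical Python sketch. It is not code from the paper: the toy population, the variable names, and the "fund the top k" allocation rule are all illustrative assumptions. It contrasts a reward-style ranking, which ranks everyone on expected outcome, with a compensation-style ranking, which first removes the between-group gap driven by circumstance.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000
circumstance = rng.integers(0, 2, n)     # immutable circumstance (e.g., a group label)
effort = rng.normal(0.0, 1.0, n)         # individual "effort" dimension
# Expected outcome depends on both circumstance and effort;
# group 0 is disadvantaged by construction.
expected_outcome = 0.5 * circumstance + effort

def allocate_by_reward(expected_outcome, k):
    """Ex ante reward principle: rank everyone on expected outcome
    and fund the top k, preserving effort-driven differences."""
    return np.argsort(-expected_outcome)[:k]

def allocate_by_compensation(expected_outcome, circumstance, k):
    """Ex ante compensation principle: subtract each circumstance
    group's mean before ranking, so the ranking reflects only
    within-group (effort-driven) differences."""
    adjusted = expected_outcome.copy()
    for g in np.unique(circumstance):
        mask = circumstance == g
        adjusted[mask] -= expected_outcome[mask].mean()
    return np.argsort(-adjusted)[:k]

reward_idx = allocate_by_reward(expected_outcome, 100)
comp_idx = allocate_by_compensation(expected_outcome, circumstance, 100)
print("disadvantaged-group share, reward rule:      ",
      (circumstance[reward_idx] == 0).mean())
print("disadvantaged-group share, compensation rule:",
      (circumstance[comp_idx] == 0).mean())

Under the reward rule, the disadvantaged group wins fewer of the k slots because part of its outcome gap comes from circumstance; the compensation rule removes that gap before ranking.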

Basu shows that, in both ideal and practical settings, failing to include race corrections will propagate systemic inequities and discrimination in diagnostic models, and in those prognostic models that inform decisions invoking an ex ante compensation principle. In contrast, including race in prognostic models that inform resource allocations following an ex ante reward principle can compromise equal opportunities for patients from different races.

“To support these arguments, I use simulations to study how an established algorithmic discrimination metric is affected under different versions of algorithms that either include or exclude race,” said Basu. “In these simulations, I also study issues around measurement errors in outcomes, features, and race. Each has its own set of biases, but none changes the basic conclusions of the paper, based on the E.O. approach.”
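The quote describes the simulations only in general terms. As an illustration of the kind of experiment involved, here is a small hypothetical Python sketch; the data-generating process, the feature names, and the choice of metric (the gap in true-positive rates across groups, a standard equal-opportunity measure) are assumptions for illustration, not the paper's actual design.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 20_000
race = rng.integers(0, 2, n)                    # toy binary race indicator
risk_factor = rng.normal(0.8 * race, 1.0, n)    # clinical feature correlated with race
y = (risk_factor + rng.normal(0.0, 1.0, n) > 0.5).astype(int)  # realized outcome

def tpr_gap(model, X, y, group):
    """Equal-opportunity style metric: absolute gap in true-positive
    rates, P(prediction = 1 | y = 1, group), between the two groups."""
    pred = model.predict(X)
    rates = [pred[(group == g) & (y == 1)].mean() for g in (0, 1)]
    return abs(rates[1] - rates[0])

# Fit one model without race and one with race as a feature.
X_no_race = risk_factor.reshape(-1, 1)
X_with_race = np.column_stack([risk_factor, race])
model_no = LogisticRegression().fit(X_no_race, y)
model_with = LogisticRegression().fit(X_with_race, y)

print("TPR gap, race excluded:", tpr_gap(model_no, X_no_race, y, race))
print("TPR gap, race included:", tpr_gap(model_with, X_with_race, y, race))

Measurement error of the kind the quote mentions could be layered onto such a setup, for example by randomly flipping a fraction of the recorded race labels or outcomes before fitting, and re-computing the metric.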

 

***

Media Inquiries: Contact Scott Braswell at braswels@uw.edu.