Using Causal Reasoning To Guide Algorithms Toward a Fairer World

Apr 5, 2020 | News

Learning algorithms, which are becoming an increasingly ubiquitous part of our lives, do precisely what they are designed to do: find patterns in the data they are given. The problem, however, is that even if patterns in the data were to lead to very accurate predictions, the processes by which the data is collected, relevant variables are defined, and hypotheses are formulated may depend on structural unfairness found in our society. Algorithms based on such data may well serve to introduce or perpetuate a variety of discriminatory biases, and thereby maintain the cycle of injustice.

For instance, it is well known that statistical models predict recidivism at higher rates among certain minorities in the United States [1]. To what extent are these predictions discriminatory? What is a sensible framework for thinking about these issues? A growing community of experts with a variety of perspectives is now addressing issues like this, in part by defining and analyzing data science problems through the lens of fairness and transparency, and proposing ways to mitigate harmful effects of algorithmic bias [2-7].

Adjusting for biases in data for the purposes of learning and inference is a well-studied issue in statistics. Take, for example, the perennial problem of missing data when trying to analyze polls. If we wish to determine who will win a presidential election, we could ask a subset of people for whom they will vote and try to extrapolate these findings to the general population. The difficulty arises, however, when one group of a candidate’s supporters is systematically underrepresented in polls (perhaps because those supporters are less likely to trust mainstream polling). This creates what is known as selection bias in the data available from the poll. If not carefully addressed in statistical analysis, selection bias may easily yield misleading inferences about the voting patterns of the general population. Common methods for adjusting for systematically missing data include imputation and weighting. Imputation aims to learn a model that can correctly predict missing entries, even when they are missing due to complex patterns, and then “fill in” those entries using the model. Weighting aims to find a weight that quantifies how “typical” a fully observed element in the data is, and then uses these weights to adjust the fully observed parts of the data so that they more closely approximate the true underlying population, as the sketch below illustrates.
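To make the weighting idea concrete, here is a minimal sketch in Python (not from the original article) that simulates a poll with selection bias and corrects the naive estimate with inverse probability weights. The population size, support rate, and response probabilities are all illustrative assumptions; the response probabilities are treated as known here, whereas in practice they would have to be estimated from covariates.

```python
# Minimal sketch: inverse probability weighting for selection bias.
# All numbers and variable names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True population: 52% support candidate A.
supports_a = rng.random(n) < 0.52

# Selection bias: supporters of A respond to the poll less often.
response_prob = np.where(supports_a, 0.3, 0.5)
responded = rng.random(n) < response_prob

# Naive estimate from respondents alone understates support for A.
naive = supports_a[responded].mean()

# Weight each respondent by 1 / P(respond). Here the probability is
# known by construction; in practice it is estimated from covariates.
weights = 1.0 / response_prob[responded]
weighted = np.average(supports_a[responded], weights=weights)

print(f"true: 0.520  naive: {naive:.3f}  weighted: {weighted:.3f}")
```

Running this shows the naive estimate landing near 0.39, while the weighted estimate recovers the true 0.52: the weights up-count the underrepresented group to stand in for the supporters who never answered the poll.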

Causal inference problems, by contrast, occur when we want to use data to guide decision-making. For example, should we eat less fat or fewer carbohydrates? Should we cut or raise taxes? Should we give a particular type of medication to a particular type of patient? Answering questions of this type introduces analytic challenges not present in outcome prediction tasks. Consider the problem of using data collected from electronic health records to determine which medications to assign to which patients. The difficulty is that medications within the hospital are assigned with the aim of maximizing patient welfare. In particular, patients who are sicker are more likely to receive certain types of medication, and sicker patients are also more likely to develop adverse health outcomes or even die. In other words, an observed correlation between receiving a particular type of medication and an adverse health event may be due to a poor choice of medication, or it may be a spurious association induced by underlying health status, as the sketch below illustrates. These sorts of correlations are so common in observational data that they have led to a well-worn refrain in statistics: correlation does not imply causation.
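As a further illustrative sketch (again not from the original article), the following Python simulation captures exactly this situation: illness severity drives both treatment assignment and death, so the naive comparison makes a genuinely helpful drug look harmful, while stratifying on the confounder (a simple form of backdoor adjustment) recovers the true effect. All coefficients and variable names are invented for the example.

```python
# Minimal sketch: confounding by illness severity, and adjustment by
# stratification. All coefficients are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Confounder: severity drives both treatment and outcome.
severity = rng.random(n)                          # 0 = healthy, 1 = very sick
treated = rng.random(n) < 0.1 + 0.8 * severity    # sicker -> more often treated
# The medication truly lowers the death probability by 0.10.
death_prob = 0.15 + 0.7 * severity - 0.10 * treated
died = rng.random(n) < death_prob

# Naive comparison: treated patients die MORE often (spurious).
naive = died[treated].mean() - died[~treated].mean()

# Adjustment: compare within severity strata, then average the
# stratum-specific differences weighted by stratum size.
strata = np.digitize(severity, np.linspace(0, 1, 11)[1:-1])
diffs, sizes = [], []
for s in np.unique(strata):
    m = strata == s
    diffs.append(died[m & treated].mean() - died[m & ~treated].mean())
    sizes.append(m.sum())
adjusted = np.average(diffs, weights=sizes)

print(f"naive: {naive:+.3f}  adjusted: {adjusted:+.3f}  (true effect: -0.100)")
```

The naive difference comes out positive (the drug appears to kill), while the stratified estimate is close to the true -0.10; this works here only because severity, the sole confounder, is fully observed, which is precisely the assumption that makes causal inference from observational data hard.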

The original article can be found here.

On the topic of AI and causality, AI World Society (AIWS.net) created a new section on Modern Causal Inference in 2020. The section is led by Professor Judea Pearl, one of the pioneers of mathematizing causal modeling in the empirical sciences. Professor Pearl’s work will contribute to causal inference for AI transparency, one of the important AIWS topics aimed at identifying, publishing, and promoting principles for the virtuous application of AI in domains including healthcare, education, transportation, national security, and other areas.