Core Concepts
The algorithmic fairness and social welfare approaches to fairness in high-stakes decision-making are fundamentally different, and their optimization criteria can be incompatible.
Abstract
The content discusses two distinct approaches to defining and measuring fairness in algorithmic decision-making:
The algorithmic fairness approach, which requires statistical parity, i.e., independence between the algorithm's decisions and protected group identity (e.g., race, gender). This approach enforces a pre-defined notion of group fairness by constraining the algorithm's output (see the sketch after this list).
The social welfare approach, which evaluates fairness from the perspective of an individual behind a "veil of ignorance", choosing how to structure society before knowing which identity they will have. This approach maximizes an aggregate social welfare function that encodes risk aversion over individual utilities.
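As a concrete illustration of the two criteria above, here is a minimal Python sketch of how each might be scored for a binary decision rule. The function names, the two-group setup, and the CRRA utility transform are assumptions made for this summary, not definitions from the source.

```python
import numpy as np

def statistical_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between two groups;
    a gap of zero means the decisions satisfy statistical parity."""
    return abs(decisions[groups == 0].mean() - decisions[groups == 1].mean())

def social_welfare(utilities, rho=2.0):
    """Veil-of-ignorance welfare: average utility of a randomly drawn
    individual under a concave (risk-averse) transform. Larger rho means
    more risk aversion; rho -> infinity approaches Rawlsian max-min.
    Assumes strictly positive utilities."""
    if rho == 1.0:
        return np.log(utilities).mean()
    return (utilities ** (1.0 - rho) / (1.0 - rho)).mean()
```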
The content demonstrates that the two approaches can select incompatible optimal algorithms, even in a simple example. The algorithmic fairness approach insists on statistical parity, while the social welfare approach may prefer an algorithm whose decisions are perfectly correlated with group identity, because targeting benefits to the disadvantaged group maximizes that group's expected utility.
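To make the incompatibility concrete, here is a toy computation in Python. The numbers, the group structure, and the use of a max-min (Rawlsian) welfare criterion are invented for illustration; they are not the paper's actual example.

```python
import numpy as np

# Toy illustration (all numbers invented for this sketch): one beneficial
# resource, two groups, group 1 starting from a lower baseline utility.
groups = np.array([0, 0, 1, 1])
baseline = np.array([1.0, 1.0, 0.2, 0.2])
benefit = 0.5  # utility gain from receiving a positive decision

candidates = {
    "targeted (decisions = group identity)": (groups == 1).astype(float),
    "statistical parity (equal rates)": np.array([1.0, 0.0, 1.0, 0.0]),
}
for name, dec in candidates.items():
    util = baseline + benefit * dec
    gap = abs(dec[groups == 0].mean() - dec[groups == 1].mean())
    worst = min(util[groups == 0].mean(), util[groups == 1].mean())
    print(f"{name}: parity gap = {gap:.2f}, worst-off group utility = {worst:.2f}")

# targeted: gap = 1.00 (maximal statistical unfairness), worst-off utility = 0.70
# parity:   gap = 0.00,                                  worst-off utility = 0.45
```

The rule that violates statistical parity as badly as possible is exactly the one that does best for the worst-off group, which is the tension the example is meant to expose.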
The authors propose a more general framework that nests both approaches: the designer maximizes a function of the algorithm's accuracy subject to a constraint on a measure of unfairness. Depending on how the unfairness measure is chosen, it can capture either the statistical notions of group fairness or the social welfare notion of risk aversion over individual utilities.
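One way to write this nested program (the symbols W, U, and epsilon below are assumed notation for this summary, not necessarily the authors'):

```latex
\max_{a \in \mathcal{A}} \; W\!\left(\mathrm{Acc}(a)\right)
\quad \text{subject to} \quad U(a) \le \varepsilon
```

Here Acc(a) is the algorithm's accuracy, W an increasing function of it, U the chosen unfairness measure, and epsilon the designer's tolerance. Setting U to a statistical parity gap recovers the algorithmic fairness approach; setting U to reflect risk aversion over individual utilities recovers the social welfare approach.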
The key insight is that the algorithmic fairness and social welfare approaches are fundamentally different and cannot easily be reconciled. Designers must therefore consider carefully which notion of fairness is most appropriate for their context and objectives.
Stats
The content presents no key metrics or figures in support of the authors' arguments.
Quotes
The content contains no striking quotes supporting the authors' key arguments.