Core Concepts
AI governance requires a sociotechnical perspective to navigate trade-offs effectively.
Abstract
The paper argues that a sociotechnical perspective is needed to understand and navigate trade-offs in AI governance. It focuses on two formal trade-offs: between predictive accuracy and fairness, and between predictive accuracy and interpretability. The author argues against the prevalent interpretation that equates these formal, model-level trade-offs with tensions between the underlying values. Instead, they emphasize that validity, compositionality, and deployment dynamics must be considered to bridge the gap between formal trade-offs and practical impacts on values. The discussion highlights the complexity of decision-making in AI-based systems, the importance of interdisciplinary collaboration, and the need for a broader understanding of the sociotechnical context.
Directory:
- Introduction
  - Formal trade-offs in responsible AI governance
  - Importance of normative engagement in AI governance
- Trade-offs in AI-based Decision-Making
  - Accuracy-fairness and accuracy-interpretability trade-offs
  - Prediction-based decision-making and model optimization
- Sociotechnical Perspective on Interpreting the Trade-offs
  - Validity and relevance considerations
  - Compositionality in decision-making systems
  - Deployment dynamics and long-term implications
- Discussion
  - Expanding normative engagement and challenges
  - The role of interdisciplinary collaboration in responsible AI governance
- Conclusion
  - Importance of a sociotechnical perspective in navigating trade-offs effectively
Key Statements
"This paper focuses on two highly publicized formal trade-offs in the field of responsible artificial intelligence (AI)—between predictive accuracy and fairness and between predictive accuracy and interpretability."
"Ensuring fairness (in some statistical sense) might necessitate relinquishing the opportunity of deploying more predictively accurate models."
"The most accurate models might be “blackboxes” that lack interpretability (in some sense), whose deployment can threaten values that interpretability is said to support."
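The first statement above — that ensuring fairness in some statistical sense may mean relinquishing a more accurate model — can be made concrete with a small synthetic sketch. This is not from the paper: the data, thresholds, and the choice of demographic parity (equal selection rates across groups) as the fairness criterion are all hypothetical illustrations of the general point.

```python
# Hypothetical sketch: when two groups have different base rates, forcing
# (near-)equal selection rates across them can rule out the most accurate
# decision thresholds. All data here is synthetic.
import random

random.seed(0)

def sample(n, prevalence):
    """Synthetic (score, outcome) pairs; positives score ~0.3 higher."""
    out = []
    for _ in range(n):
        y = random.random() < prevalence
        s = 0.3 + (0.3 if y else 0.0) + random.random() * 0.4
        out.append((s, y))
    return out

group_a = sample(400, prevalence=0.6)  # higher base rate
group_b = sample(400, prevalence=0.3)  # lower base rate

thresholds = [t / 50 for t in range(51)]  # 0.00 .. 1.00

def accuracy(data, t):
    return sum((s >= t) == y for s, y in data) / len(data)

def selection_rate(data, t):
    return sum(s >= t for s, _ in data) / len(data)

acc_a = {t: accuracy(group_a, t) for t in thresholds}
acc_b = {t: accuracy(group_b, t) for t in thresholds}
rate_a = {t: selection_rate(group_a, t) for t in thresholds}
rate_b = {t: selection_rate(group_b, t) for t in thresholds}

# Unconstrained rule: each group gets its accuracy-maximizing threshold.
ta = max(thresholds, key=acc_a.get)
tb = max(thresholds, key=acc_b.get)
acc_free = (acc_a[ta] + acc_b[tb]) / 2

# Parity-constrained rule: among threshold pairs whose selection rates
# differ by at most 2 points, pick the most accurate pair.
pairs = [(x, y) for x in thresholds for y in thresholds
         if abs(rate_a[x] - rate_b[y]) <= 0.02]
cx, cy = max(pairs, key=lambda p: acc_a[p[0]] + acc_b[p[1]])
acc_fair = (acc_a[cx] + acc_b[cy]) / 2

print(f"unconstrained accuracy:      {acc_free:.3f}")
print(f"parity-constrained accuracy: {acc_fair:.3f}")
```

Because the parity-constrained rule optimizes over the same threshold grid under an extra constraint, its accuracy can never exceed the unconstrained rule's — which is the formal trade-off; the paper's point is that whether this formal gap actually tracks a tension between the *values* of accuracy and fairness depends on validity, compositionality, and deployment dynamics.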
Quotes
"In many cases, we can’t have them all, as interventions that realize some will sacrifice others."
"Taken together, these considerations form a sociotechnical framework that could guide those involved in AI governance to assess how, in many cases, we can and should have higher aspirations than the prevalent interpretation of the trade-offs would suggest."
"The relation between formal model-level properties and the corresponding values is not a straightforward one, and involves many assumptions."