Core Concepts
Diverse stakeholders involved in public sector AI systems face communication challenges that can lead to misinterpretation and misuse of algorithms, with critical implications for impacted populations. Key findings include: 1) developers play a predominant role but lack guidance from domain experts; 2) end-users and policy-makers lack the technical skills to interpret system limitations; and 3) citizens are structurally absent from the algorithm life-cycle.
Abstract
The study investigates the communication processes and challenges between diverse stakeholders involved in fairness-related decisions for public sector AI systems. Through semi-structured interviews with 11 practitioners, the researchers identified key elements of the communication patterns, including the roles, tasks, skills, and phases of the algorithm life-cycle.
The main findings are:
Developers play the most prominent role throughout the algorithm life-cycle, yet they lack guidance and input from stakeholders with advisory, policy-making, and domain expertise. As a result, developers often take on extra roles they are not equipped for.
End-users and policy-makers often lack the technical skills to interpret the limitations and uncertainty of the AI systems, and they rely on developers to make decisions about fairness issues. This can result in misunderstanding and misuse of the algorithms.
Citizens, who are the data subjects impacted by the AI systems, are structurally absent from the decision-making processes throughout the algorithm life-cycle. This may lead to fairness-related decisions that do not adequately consider the perspectives of affected populations.
The researchers constructed a conceptual framework to model the key elements and relationships in the communication patterns, which can be used to identify and analyze the challenges. The framework covers Actors, Roles, Skills, Tasks, Information Exchange, and Phases of the algorithm life-cycle, and can specify when elements are "missing" in the communication process.
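The framework's six elements and its notion of a "missing" element can be illustrated with a small data-structure sketch. This is not the authors' implementation; the class shapes, field names, and example values below are illustrative assumptions, showing only how an information exchange with no sender could flag a communication gap:

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Optional

# Hypothetical phases of the algorithm life-cycle; the exact phase
# names are assumptions, not taken from the study.
class Phase(Enum):
    DESIGN = auto()
    DEVELOPMENT = auto()
    DEPLOYMENT = auto()
    MONITORING = auto()

@dataclass
class Actor:
    """A stakeholder, with the Roles and Skills elements attached."""
    name: str                                   # e.g. "developer", "citizen"
    roles: list[str] = field(default_factory=list)
    skills: list[str] = field(default_factory=list)

@dataclass
class InformationExchange:
    """One communication event tied to a Task and a Phase."""
    sender: Optional[Actor]     # None marks a "missing" element
    receiver: Optional[Actor]
    task: str
    phase: Phase

    def is_missing(self) -> bool:
        # A gap in the communication pattern: an expected exchange
        # where one side has no actor filling the role.
        return self.sender is None or self.receiver is None

# Example: developers receive no fairness input at design time.
dev = Actor("developer", roles=["builds model"], skills=["ML"])
exchange = InformationExchange(sender=None, receiver=dev,
                               task="fairness requirements", phase=Phase.DESIGN)
print(exchange.is_missing())  # True: no domain expert provides input
```

Modeling "missing" as an absent actor in an otherwise expected exchange is one simple way to make the framework's gap analysis mechanical: iterating over expected exchanges and filtering on `is_missing()` would surface exactly the kinds of absences the study reports.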
Overall, the study highlights the need for better governance structures, skill development, and stakeholder collaboration to address fairness issues in public sector AI systems.
Quotes
"We don't know if governments and municipalities can understand the model." - P3
"There should be more focus on asking users what policy-makers perceive as risks and biases" - P8
"it depends on the type of AI. If it has an impact on citizens or uses a lot of data from citizens, it would be relevant to include a focus group of citizens from the beginning but it is less relevant for e.g. road repairs." - P5
"the technical colleagues give advice when the model is good enough, but it's a bit of a grey area. We also rely on literature and on the technical teams' judgment." - P2
"Training for users is needed, to remind users not to rely on the tool but that the decision is up to them." - P6
"There is no direct citizen participation." - P4