Basic Concepts
The author explores workers' experiences with AI-based decision support tools in child welfare, highlighting the factors guiding their reliance on these systems and the challenges they face in integrating algorithmic predictions into their decision-making processes.
Summary
The study examines the use of AI-based decision support tools in child welfare, focusing on workers' practices and challenges with the Allegheny Family Screening Tool (AFST). Workers exhibit a mix of reliance on and skepticism toward the AFST, balancing its predictions against their own judgment. The study emphasizes the need for effective human-AI partnerships to improve decision-making in child welfare contexts.
Key points include:
- Workers rely on rich contextual information beyond what the AI model captures.
- Beliefs about ADS capabilities influence workers' decisions.
- Organizational pressures impact workers' use of ADS.
- Workers are aware of misalignments between algorithmic predictions and their own objectives.
- Challenges faced by workers using AFST for child maltreatment screening.
- Importance of transparency and communication from ADS tools like AFST.
Statistics
The AFST has been used for half a decade but remains a source of tension for many workers.
The AFST outputs a score from 1 (low risk) to 20 (high risk).
Call screeners make screening recommendations based on the AFST score and other case-related information.
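The workflow above — an algorithmic score weighed against a worker's own read of the case — can be sketched in a few lines. This is a purely illustrative toy, not the actual AFST or county policy: the function name, the threshold of 15, and the disagreement flag are all invented for the sketch; the paper only establishes that screeners combine the 1–20 score with other case information.

```python
def recommend_screening(afst_score: int, screener_assessment: str):
    """Combine an algorithmic risk score with a screener's judgment.

    afst_score: integer in [1, 20], as output by the AFST
        (1 = low risk, 20 = high risk).
    screener_assessment: 'screen_in' or 'screen_out', the screener's
        independent read based on contextual case information.
    Returns (recommendation, disagreement_flag).
    """
    if not 1 <= afst_score <= 20:
        raise ValueError("AFST score must be between 1 and 20")

    # Illustrative threshold only: treat high scores as leaning
    # toward screening the referral in.
    model_leaning = "screen_in" if afst_score >= 15 else "screen_out"

    # The worker's contextual judgment drives the recommendation;
    # disagreements with the score are flagged rather than overridden,
    # mirroring the mix of reliance and skepticism the study describes.
    disagreement = model_leaning != screener_assessment
    return screener_assessment, disagreement
```

The point of the sketch is the division of labor: the score informs, but the human recommendation is not a deterministic function of it.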
Quotes
"I think […] a dirty house or something like that, I feel like those are often the ones where you’re not sure if it is just a value or moral judgment." - C6
"I look at the score. I often, you know, am in agreement with it. I think it does a good job trying to pull everything together." - C4
"I hate it […] I don’t think it should have a role, period, honestly." - C2