
Designing for Community Stakeholders’ Interactions with AI in Policing: Understanding Algorithmic Crime Mapping


Core Concept
Algorithmic crime mapping impacts human-AI decision-making and requires community engagement for ethical design.
Summary

The study explores algorithmic crime mapping's impact on human-AI decision-making, emphasizing the need for community engagement. It delves into stakeholders' interactions with an intuitive AI application, highlighting differences in perspectives and needs based on background. The analysis includes participants' mental workload assessment during the interaction.

Abstract:

  • Research focuses on algorithmic crime mapping in criminal justice.
  • Experiments with 60 participants explore interactions with ADS.
  • Findings reveal stakeholder feedback on AI design and use.

Introduction:

  • Predictive policing using ADS supplements conventional practices.
  • Concerns arise regarding trustworthiness of data sources.
  • Calls for human-centered systems to complement workers' expertise.

Methods:

  • Mixed-methods study conducted in a mid-sized U.S. city.
  • Interactive crime-mapping application built on a kernel density estimation (KDE) algorithm.
  • Participants grouped as community members, technical experts, and law enforcement agents (LEAs).

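To make the Methods section concrete, the KDE-based hotspot mapping it mentions can be sketched as below. This is a minimal illustration with hypothetical incident coordinates and an arbitrary hotspot threshold; the paper's actual application, bandwidth, and grid settings are not specified here.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical incident locations (x, y) in arbitrary map units.
incidents = np.array([
    [1.0, 1.2], [1.1, 0.9], [0.9, 1.0],   # a dense cluster
    [4.0, 4.1], [4.2, 3.9],               # a second cluster
    [2.5, 2.0],                           # an isolated incident
])

# Fit a 2-D Gaussian kernel density estimate over the incident points.
kde = gaussian_kde(incidents.T)

# Evaluate the density on a grid to form the "heat map" shown to users.
xs, ys = np.meshgrid(np.linspace(0, 5, 50), np.linspace(0, 5, 50))
density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)

# A "hotspot" is any cell above a chosen density threshold (here the
# 90th percentile); the threshold choice itself shapes what viewers see.
hotspots = density > np.percentile(density, 90)
print(hotspots.sum(), "of", hotspots.size, "grid cells flagged as hotspots")
```

The threshold step is where design choices matter most for stakeholders: the same incident data can yield very different-looking maps depending on bandwidth and cutoff, which is one reason the study lets participants experiment with different maps.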
Results:

  • Community members question motivations behind crime mapping.
  • LEAs exhibit anchoring bias, relying heavily on first map presented.
  • Participants experiment with different maps; LEAs overestimate hotspots.
  • Mental workload assessment shows higher scores for LEAs in mental demand and temporal demand.

Statistics
Participants found the task mentally demanding: all three groups scored high on mental demand (MD), and LEAs had higher average weighted MD scores than community members and technical participants. LEAs also spent more time interpreting the map, resulting in higher temporal demand (TD) scores. LEAs and technical participants scored high on performance (PF), indicating confidence in their results, while community members scored in the medium range on frustration level (FT), suggesting some uncertainty about theirs.
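The weighted scores above follow the standard NASA-TLX procedure: six dimension ratings combined with weights from 15 pairwise comparisons. The sketch below shows that computation with illustrative numbers, not the study's data.

```python
# Six NASA-TLX dimensions, rated 0-100: mental demand, physical demand,
# temporal demand, performance, effort, frustration.
ratings = {"MD": 80, "PD": 20, "TD": 70, "PF": 60, "EF": 65, "FT": 50}

# Weights come from 15 pairwise comparisons between dimensions: each
# dimension is counted once per comparison it wins, so weights sum to 15.
weights = {"MD": 5, "PD": 0, "TD": 4, "PF": 2, "EF": 3, "FT": 1}
assert sum(weights.values()) == 15

# Overall weighted workload: the weight-averaged rating.
weighted_tlx = sum(ratings[d] * weights[d] for d in ratings) / 15
print(round(weighted_tlx, 1))
```

Because the weights reflect which dimensions a participant considered most important, two groups with similar raw MD ratings can still differ in weighted MD, which is how the LEA vs. community-member comparison in the statistics is reported.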
Quotes
"I think this is like the idea where you’re putting the police where they can make the most money versus actually control crime..." – C1, Community Member

"It’s kind of a feedback loop... Are you really helping?" – T5, Technical Participant

"Oftentimes, we take a look at our calls for service... That determines how many officers we would have out at first." – L8, Law Enforcement Agent

Extracted Key Insights

by Md Romael Ha... at arxiv.org, 03-20-2024

https://arxiv.org/pdf/2402.05348.pdf
Are We Asking the Right Questions?

Deep-Dive Questions

How can algorithmic crime mapping tools address concerns of over-policing?

Algorithmic crime mapping tools can address concerns of over-policing by incorporating transparency and accountability measures. One way to achieve this is by involving community members in the design and implementation process of these tools. By including diverse perspectives, especially from communities that are disproportionately affected by policing practices, the tool developers can ensure that the algorithms are not reinforcing biases or targeting specific neighborhoods unfairly. Additionally, setting clear guidelines and protocols for how the data will be used and ensuring that there are checks in place to prevent misuse of the technology can help mitigate concerns of over-policing.

What are potential implications of LEAs exhibiting anchoring bias in decision-making?

The implications of Law Enforcement Agents (LEAs) exhibiting anchoring bias in decision-making could lead to a reinforcement of existing beliefs or initial information without considering new evidence or alternative viewpoints. In the context of algorithmic crime mapping, this bias could result in officers relying too heavily on their preconceived notions about certain areas being high-crime zones without critically evaluating the data presented by the tool. This could potentially lead to resource allocation decisions based on outdated or inaccurate assumptions rather than objective analysis.

How might community involvement enhance the ethical design of AI systems beyond policing?

Community involvement can enhance the ethical design of AI systems beyond policing by bringing diverse perspectives into the development process. Community members often have firsthand knowledge and experiences that developers may lack, allowing for a more comprehensive understanding of potential impacts and considerations. By engaging with communities throughout all stages of AI system development, from problem formulation to deployment, designers can ensure that systems align with community values, needs, and priorities. This participatory approach fosters trust, transparency, and accountability while promoting equity and fairness in AI technologies across various domains beyond policing.