
Analyzing AI Incidents and 'Networked Trouble': Research Agenda


Core Concepts
The author argues for a research agenda focused on AI incidents as a means of public participation in shaping AI systems through networked online behaviors, emphasizing the importance of understanding how incidents are constructed.
Abstract
The content discusses the significance of AI incidents, focusing on an example involving Twitter's cropping algorithm. It highlights the role of networked trouble in shaping interactions between individuals and algorithms, leading to changes in AI systems. The article proposes a research agenda to study how formats for troublemaking materialize in online environments, impacting public participation in AI development.
Stats
In September 2020, a Twitter user posted a tweet that sparked controversy around racist bias in Twitter's image-cropping algorithm. The tweet went viral, and Twitter ultimately abandoned the cropping algorithm amid accusations of racial bias.

Thousands of AI incidents have been documented, influencing technology companies to alter their AI systems.

Meunier et al. (2021) propose thinking of AI incidents as 'algorithm trouble' that challenges perspectives on living with algorithms.

Ahmed's theory of troublemaking is applied to analyze how actors interact with algorithms in deployment. Troublemaking involves pointing out invisible structures, achieving shared orientation, and problematizing harm caused by social interactions.

Networked trouble relies on networking media and audiences, using formats for participation to coordinate attention across digital environments.

In the Twitter incident, shared orientation was achieved through the discovery of a format for troublemaking that enabled interaction with the algorithm. Inscriptions produced during the incident framed the algorithm as racist, highlighting its harmful behavior and prompting action from technology companies.
Quotes
"AI incidents are not simply technical malfunctions but also involve troubling individuals and institutions." - Meunier et al. (2021) "Troublemaking forces social interaction into breakdown to frame it as harmful and in need of remedy." - Ahmed (2017) "Networked trouble relies on formats for participation that can spread across digital environments." - Shaffer Shane

Key Insights Distilled From

by Tommy Shaffer Shane at arxiv.org, 03-14-2024

https://arxiv.org/pdf/2403.07879.pdf
AI incidents and 'networked trouble'

Deeper Inquiries

How can actors point out invisible features of algorithms beyond social media platforms?

Actors can point out invisible features of algorithms beyond social media platforms by engaging in various forms of troublemaking. One way is through the creation and dissemination of AI incidents, where examples of AI going wrong are highlighted to spark controversy and draw attention to underlying issues. These incidents can be documented in databases like the AI Incident Database, allowing for a systematic collection and analysis of algorithmic failures. Additionally, actors can collaborate with researchers and advocacy groups to conduct audits or assessments that reveal hidden biases or discriminatory patterns within algorithms. By participating in these activities, actors can bring attention to previously unnoticed aspects of algorithms that may have harmful implications.

What are the implications of framing harmful algorithms as participants in contesting their own design?

Framing harmful algorithms as participants in contesting their own design has significant implications for how we understand and address algorithmic bias and discrimination. By recognizing algorithms as active agents capable of producing harm, it shifts the focus from viewing them as neutral tools to acknowledging their role in perpetuating societal inequalities. This reframing opens up new possibilities for accountability and intervention, prompting stakeholders to engage more actively in addressing algorithmic flaws. It also highlights the need for collaborative efforts between developers, regulators, researchers, and affected communities to co-create solutions that mitigate bias and promote fairness in AI systems.

How does networked trouble impact public participation in shaping AI systems?

Networked trouble significantly impacts public participation in shaping AI systems by providing a platform for collective action and engagement with technology companies. Through networked environments like social media, individuals can coordinate efforts to highlight issues with AI systems, mobilize support around specific causes or incidents, and amplify their voices on a global scale. This form of participatory activism enables diverse perspectives to be heard, challenges existing power structures within tech companies, and influences decision-making processes related to AI development and deployment. Networked trouble creates opportunities for grassroots movements to hold technology firms accountable for their products' ethical implications while fostering transparency and dialogue between different stakeholders involved in shaping the future of artificial intelligence.