# Legitimacy and power dynamics in AI ethics work

Epistemic Hierarchies and the Marginalization of Embodied Experiences in AI Ethics Labor


Key Concepts
Dominant AI ethics practices that prioritize quantification and objectivity risk delegitimizing and marginalizing ethics work grounded in embodied, situated experiences, especially for women and other minoritized groups.
Summary

The article examines the epistemic power dynamics at play in AI ethics labor. It shows how AI ethics work is often treated as a lower-status "chore" compared to the "real work" of building AI systems. Some practitioners seek to legitimize ethics work by casting it as objective and quantitative, for example by automating processes such as filling out model cards. At the same time, attempts by study participants to raise ethics concerns from their own situated, embodied perspectives are often delegitimized.

The author draws on feminist STS, postcolonial, and Black feminist theory to analyze these dynamics. They demonstrate how dominant AI ethics practices that prioritize quantification and objectivity risk further entrenching the epistemic authority of quantitative, "objective" ways of knowing, while marginalizing ethics work grounded in lived experience.

In response, the author proposes "humble technical practices" - quantitative or technical practices that explicitly acknowledge their epistemic limits and make space for other ways of knowing. This involves platforming marginalized voices, recognizing the validity of lived experience, and resisting the tendency to segment ethics work into its own isolated domain. The goal is to level epistemic power differentials and enable a more pluralistic, inclusive approach to AI ethics.

Statistics
"A lot of filling out the model card is subjective, right, because it's based on my understanding of how we use the model and my understanding of the data, the data set." "we try to automate a lot of that now" by building "tooling to run the data and the model [...] that's actually, objectively looking at it." "The de-bug-ability of it so that at least you can fix the problem" is most pressing. "fairness," based on "a lot of definitions that we have around nondiscrimination ... policy" are first order concerns.
Quotes
"I don't have white skin and the device did and I was like: Wait, if I'm developing it, if people who are using it, [they] might feel even more intensely about it." "it was easier to sort of bring up those diverse opinions" by virtue of her identity, but she was concerned others would ask whether her "perspective [is] coming because I am a unique person [...] or is it only coming because I'm a woman and and then a woman of color?" "feeling like it's [...] a part of your lived experience to be harmed by technologies, versus feeling like it's an abstract intellectual conversation." "the people who signed up for discussion on [the article] 'Google will know I'm pregnant before I [do]' were men. And they were talking about periods and being pregnant and all these things without like, any personal experience."

Key Insights

by David Gray W... at arxiv.org, 04-11-2024

https://arxiv.org/pdf/2402.08171.pdf
Epistemic Power, Objectivity and Gender in AI Ethics Labor

Deeper Questions

How can we create spaces and structures that actively elevate and center the knowledge and perspectives of those with direct lived experience of the harms caused by AI systems?

In order to create spaces that elevate and center the knowledge and perspectives of those directly impacted by the harms of AI systems, several key strategies can be implemented:

- Diverse Representation: Ensure that decision-making bodies, research teams, and advisory groups in AI development include individuals with diverse backgrounds, especially those who have direct lived experience of the harms caused by AI systems. This can help bring unique perspectives to the table and inform more ethical and inclusive AI practices.
- Community Engagement: Foster meaningful engagement with communities affected by AI technologies. This can involve conducting community consultations, participatory design processes, and co-creation workshops to understand their needs, concerns, and priorities.
- Empowerment and Ownership: Empower individuals with lived experience to take ownership of the narratives and discussions around AI ethics. Provide platforms for them to share their stories, insights, and recommendations, ensuring that their voices are heard and valued in decision-making processes.
- Training and Capacity Building: Offer training programs and capacity-building initiatives to equip individuals with the skills and knowledge to engage meaningfully in AI ethics discussions. This can help bridge the gap between technical experts and community members, fostering more inclusive dialogues.
- Cultural Sensitivity and Humility: Approach interactions with humility, recognizing the limitations of one's own perspective and expertise. Cultivate a culture of respect, empathy, and openness to different ways of knowing and understanding the impacts of AI systems.

How might we need to rethink the very foundations of AI and computer science education to challenge the dominance of quantitative, "objective" approaches and make room for other ways of knowing?

Rethinking the foundations of AI and computer science education to challenge the dominance of quantitative, "objective" approaches involves a paradigm shift towards more inclusive and diverse pedagogical practices. Here are some key considerations:

- Interdisciplinary Curriculum: Integrate perspectives from diverse disciplines such as feminist science and technology studies, postcolonial theory, and Black feminist thought into AI and computer science education. This can help students understand the social, cultural, and ethical implications of technology beyond quantitative metrics.
- Critical Thinking and Reflection: Emphasize critical thinking skills and encourage students to reflect on the societal impacts of AI systems. Encourage them to question dominant narratives, challenge biases, and consider alternative ways of knowing and understanding technology.
- Ethical Frameworks: Incorporate ethical frameworks and discussions on power dynamics, privilege, and marginalization into the curriculum. Encourage students to consider the ethical implications of their work and the importance of centering diverse voices in AI development.
- Experiential Learning: Provide opportunities for experiential learning, such as community-engaged projects, internships, and research collaborations that expose students to real-world contexts and diverse perspectives. This can help bridge the gap between theory and practice, fostering a more holistic understanding of AI ethics.
- Cultural Competency Training: Offer training on cultural competency, diversity, and inclusion to help students navigate complex social issues in AI development. Equip them with the skills to engage respectfully and collaboratively with individuals from diverse backgrounds.

What can we learn from other disciplines and fields, such as feminist science and technology studies, postcolonial theory, and Black feminist thought, that could help transform the epistemic foundations of AI ethics in more emancipatory directions?

Drawing insights from feminist science and technology studies, postcolonial theory, and Black feminist thought can offer valuable perspectives to transform the epistemic foundations of AI ethics towards more emancipatory directions:

- Intersectionality: Embrace the concept of intersectionality to understand how multiple forms of oppression and privilege intersect in AI systems. Recognize the interconnected nature of race, gender, class, and other social identities in shaping experiences of harm and discrimination.
- Situated Knowledge: Acknowledge the importance of situated knowledge, as highlighted by feminist scholars, to center the experiences and perspectives of marginalized communities in AI ethics discourse. This can help challenge dominant narratives and amplify voices that have been historically marginalized.
- Critical Reflection: Encourage critical reflection on power dynamics, colonial legacies, and structural inequalities in AI development. By critically examining the historical and social contexts in which AI systems operate, we can work towards more ethical and socially just technological practices.
- Ethics of Care: Emphasize an ethics of care approach in AI ethics, inspired by feminist ethics, that prioritizes empathy, compassion, and relationality in technology design and decision-making. This can help foster more human-centered and inclusive AI systems.
- Decolonial Perspectives: Incorporate decolonial perspectives to challenge Eurocentric biases and colonial legacies in AI research and practice. By centering indigenous knowledge systems, diverse worldviews, and anti-colonial struggles, we can work towards decolonizing AI and promoting epistemic justice.

By integrating these insights and approaches from feminist, postcolonial, and Black feminist scholarship, we can broaden the epistemic foundations of AI ethics, promote diversity and inclusion in technology development, and strive towards more emancipatory and socially responsible AI practices.