Core Concepts
AI ethics must be addressed as a supply chain problem, considering the political economy and intra-firm relations that structure AI production, particularly by examining opportunities for intervention upstream.
Summary
The article argues that policy interventions for AI ethics must consider AI as a supply chain problem, given how the political economy and intra-firm relations structure AI production. It highlights that much like physical goods, software is assembled from components developed by many people across diverse contexts, forming an "AI supply chain."
The authors note that current AI ethics approaches typically focus on the component being developed or on its downstream effects, rather than on its upstream supply chain. They suggest that conceiving of AI ethics as a supply chain problem, and looking up the chain, can surface "values levers": practices that open up discussions about values and ethics, presenting opportunities for policy, design, and activism.
The article explores several ways of "acting upstream" in the AI supply chain:
- Applying human rights law to the working conditions of upstream AI data workers, such as low-paid annotators.
- Market-based policy interventions, such as disclosures, procurement requirements, and "choosy" customers, which can create pressure to address ethical issues.
- Design and activist practices that help stakeholders understand, question, and advocate for changes upstream in the AI supply chain.
- Ethical licensing, which recognizes the harms of making powerful AI freely available and requires downstream users to attend to their upstream dependencies and ethical commitments.
The authors conclude that these upstream approaches present future opportunities for design and policy interventions to address AI ethics.
Quotes
"Thinking about ethics and responsibility as chains of relations surfaces specific locations in which ethical decision-making can take place."
"Ethical design interventions for AI often think downstream, often drawing on design futuring, scenarios, or value sensitive design techniques to consider how stakeholder harms might occur during the deployment and use of AI systems. While useful, we argue that there are unexplored opportunities for acting upstream."
"Policy interventions focused on making producers of AI systems disclose information about their upstream practices may create market pressures to address ethical issues."