
Automatic Authorities: How AI Systems Exercise Power Over Individuals and Societies


Core Concepts
Automated computational systems, termed "Automatic Authorities", are being used to exercise significant power over individuals and societies by substantially determining what people may know, what they may have, and what their options will be. This raises normative concerns about individual freedom, social equality, and collective self-determination.
Abstract
The article explores how AI and related computational technologies are used to exercise power over people in various domains of life. It starts by defining the concept of power, distinguishing between "power to" and "power over". The focus is on "power over", the relation in which some agent can substantially shape the lives of others. The article then examines how Automatic Authorities, such as machine learning algorithms, big data analytics, and large language models, are used to exercise power in three main ways:

1. Intervening on people's interests: AI systems are used to allocate resources, impose harms, and conduct surveillance, significantly impacting people's lives.
2. Shaping people's options: Computational systems dynamically determine what is possible or impossible for people, creating new options but also restricting or penalizing certain choices.
3. Shaping beliefs and desires: AI mediates access to information, moderates online communication, and targets individuals with personalized persuasive messaging, thereby influencing what people know and how they think.

These Automatic Authorities amplify the degree, scope, and concentration of power, enabling fewer people to substantially affect the lives of more people. This raises normative concerns around individual freedom, social equality, and collective self-determination. The article argues that even when Automatic Authorities are used to achieve good ends, their exercise of power must be justified not only in terms of substantive outcomes, but also in terms of procedural legitimacy (following appropriate rules and processes) and proper authority (being exercised by those with the right to do so). The rise of powerful AI systems, like large language models, that may exercise power themselves poses a particular challenge, as it is unclear whether they could ever exercise power legitimately and with proper authority.
Stats
"AI systems are frequently used to support the allocation of resources within a population. This amounts to the exercise of power by the decision-maker (the individuals or organizations making use of the AI decision-support tool) over those who either benefit or don't from that decision." "AI is also used to surveil populations—in workplaces by employers using algorithmic management tools; in society at large by the state—which is a direct harm, and another way in which AI is used to exercise power." "Computational tools can dynamically create options. This too is a kind of power, roughly analogous to what French philosopher Michel Foucault called 'governmentality'." "Nudging (hyper or otherwise) is supposed to focus mostly on how people's options are presented to them. But hyper-personalised computational systems are also well suited to more directly shaping people's beliefs and desires."
Quotes
"Automatic Authorities are automated computational systems used to exercise power over us by substantially determining what we may know, what we may have, and what our options will be." "Power over is the social relation where an agent A can significantly affect the interests, options, beliefs and desires of another agent B but not vice versa." "Even if AI is used to exercise power for noble ends, these pro tanto objections still apply. They might be overridden by the good being done, but they can be fully resolved only if power is exercised not only for the right ends, but in the right way, and by the right people."

Key Insights Distilled From

by Seth Lazar at arxiv.org 04-10-2024

https://arxiv.org/pdf/2404.05990.pdf
Automatic Authorities

Deeper Inquiries

How can we ensure that the exercise of power through Automatic Authorities is subject to appropriate procedural legitimacy and proper authority, beyond just ensuring the power is used for good ends?

To ensure that the exercise of power through Automatic Authorities is subject to appropriate procedural legitimacy and proper authority, we need to go beyond focusing only on the ends being achieved: strict constraints and standards must govern how power is exercised.

Procedural Legitimacy:
- The decision-making process should be consistent, treating like cases alike.
- Those affected by decisions should be able to understand the rationale behind them.
- Due process standards should be met whenever feasible.
- Those exercising power should be subject to processes of contestation and review in case of misconduct.

Proper Authority:
- The individuals or entities exercising power should hold legitimate authority within the relevant institution or framework.
- That authority should stem from the people served by the institution, especially in cases of governance.
- Decision-makers should be accountable and subject to oversight mechanisms that ensure their decisions align with the values and interests of the affected population.

By adhering to these principles of procedural legitimacy and proper authority, we can ensure that the exercise of power through Automatic Authorities is not only aimed at good ends but is also carried out in a just, transparent, and accountable manner.

What are the implications of powerful AI systems, like large language models, potentially exercising power themselves, and how can we prevent the deployment of such systems if they cannot exercise power legitimately and with proper authority?

The implications of powerful AI systems, such as large language models (LLMs), potentially exercising power themselves are significant. If these systems operate beyond the effective control of their designers and deployers, they could autonomously shape information dissemination, decision-making, and governance. This raises concerns about accountability, transparency, and the preservation of individual freedom and social equality.

To prevent the deployment of such systems if they cannot exercise power legitimately and with proper authority, several steps can be taken:
- Regulatory Frameworks: Implement robust regulatory frameworks governing the development, deployment, and use of AI systems, ensuring adherence to ethical standards and legal requirements.
- Transparency and Explainability: Mandate transparency and explainability in AI systems, especially in high-stakes applications, so that their decision-making processes can be understood.
- Public Oversight: Establish mechanisms for public oversight and accountability, involving diverse stakeholders in the governance of AI technologies to prevent unchecked concentrations of power.
- Ethical Impact Assessments: Conduct thorough ethical impact assessments before deploying AI systems, evaluating their potential effects on individuals, communities, and society at large.
- Limiting Autonomy: Set limits on the autonomy and decision-making capabilities of AI systems, particularly in critical domains where human oversight and intervention are essential.

By proactively addressing these considerations and taking preventive measures, we can mitigate the risks of AI systems exercising power autonomously and ensure that their deployment aligns with principles of legitimacy, accountability, and proper authority.

How can we rethink the goals and direction of AI research, especially in areas like "AI Safety", to focus less on making all-powerful AI systems "provably beneficial" and more on preventing the deployment of such systems in the first place?

Rethinking the goals and direction of AI research, particularly in areas like "AI Safety," involves shifting the focus from proving the benefits of all-powerful AI systems to preventing their deployment in the first place. Several strategies can help realign research goals:
- Ethical Frameworks: Emphasize the development of ethical frameworks and guidelines that prioritize the responsible use of AI technologies, focusing on preventing harm and ensuring alignment with societal values.
- Governance and Regulation: Advocate for stronger governance and regulatory mechanisms that address the risks of deploying powerful AI systems, emphasizing oversight, accountability, and transparency.
- Interdisciplinary Collaboration: Encourage collaboration between AI researchers, ethicists, policymakers, and other stakeholders to collectively address the ethical and societal implications of AI technologies and to guide research directions.
- Public Engagement: Foster public engagement and dialogue on the ethical implications of AI, involving diverse perspectives and voices in shaping the research agenda and decision-making processes.
- Risk Assessment and Mitigation: Prioritize research on risk assessment and mitigation strategies that identify potential harms and vulnerabilities in AI systems, aiming to prevent negative consequences before deployment.

By reorienting AI research toward preventing the deployment of potentially harmful AI systems and prioritizing ethical considerations, the research community can help develop AI technologies that align with societal values, promote human well-being, and uphold principles of fairness and justice.