This paper presents a taxonomy of 21 normative ethical principles that have been discussed in the AI and computer science literature. It examines how each principle has been previously operationalized, highlighting key themes that AI practitioners should be aware of when seeking to implement ethical principles in the reasoning capacities of responsible AI systems.
The authors first provide an overview of how they categorize the surveyed works, classifying each by the ethical principles it explicitly mentions, the type of contribution it makes, and the evaluation method it uses. They then walk through the taxonomy of ethical principles, which includes deontology, egalitarianism, proportionalism, Kantian ethics, virtue ethics, consequentialism, utilitarianism, maximin, envy-freeness, the doctrine of double effect, and do no harm.
For each principle, the authors summarize its definition, previous applications, and potential difficulties in operationalization. They find that certain principles, such as utilitarianism, are more commonly discussed than others, and that there is a need for more precise specification of the ethical principles used.
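To make the notion of operationalization concrete, here is a minimal sketch (not taken from the paper) of how two of the surveyed principles, utilitarianism and maximin, might be encoded as action-selection rules over hypothetical per-stakeholder utility estimates. The action names and utility values are invented for illustration.

```python
def utilitarian_choice(actions):
    """Utilitarianism: pick the action maximizing total utility."""
    return max(actions, key=lambda a: sum(actions[a]))

def maximin_choice(actions):
    """Maximin: pick the action maximizing the worst-off stakeholder's utility."""
    return max(actions, key=lambda a: min(actions[a]))

# Hypothetical utilities for three stakeholders under two candidate actions.
actions = {
    "share_data": [9, 8, -4],  # high total welfare, but one stakeholder is harmed
    "withhold":   [3, 3, 2],   # lower total welfare, but no one is badly off
}

print(utilitarian_choice(actions))  # → share_data (total 13 vs 8)
print(maximin_choice(actions))      # → withhold (worst-off 2 vs -4)
```

The divergence between the two rules on the same inputs illustrates why precise specification of the chosen principle matters: different principles can prescribe different actions in identical situations.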
The authors envision that this taxonomy will facilitate the development of methodologies to incorporate normative ethical principles in the reasoning capacities of responsible AI systems, promoting ethical evaluation that considers social contexts and human values.
by Jessica Wood... at arxiv.org, 09-12-2024
https://arxiv.org/pdf/2208.12616.pdf