Core Concepts
The author aims to define and explore the concept of responsible artificial intelligence, emphasizing the urgent need for international regulation and frameworks to guide AI development. The research provides a comprehensive definition of responsible AI and advocates for a human-centric approach.
Abstract
This work examines the importance of responsible artificial intelligence in EU policy discussions, highlighting AI's dual nature as both beneficial and potentially harmful. It argues for international regulation, frameworks to guide AI development, and a human-centric approach centered on ethics, model explainability, privacy, security, and trust. A structured literature review analyzes definitions of responsible AI; explores closely related expressions such as ethical AI and trustworthy AI; and describes the research methodology, data-extraction techniques, and key metrics supporting the case for implementing responsible AI. Papers on trustworthy AI, ethical AI, and explainable AI (XAI) are reviewed, including surveys of healthcare applications. The analysis covers XAI techniques for the black-box model problem; distinctions among near-synonymous terms such as interpretability, explainability, and intelligibility; motivations for adopting XAI in sectors such as medicine and healthcare; stakeholder considerations; and evaluation methods.
Stats
According to [13], "Formal verification is a way to provide provable guarantees and thus increase one's trust that the system will behave as desired."
In [31], privacy and human agency are seen as central aspects: people who felt they had more control over their own online information were more likely to view automated decision-making (ADM) through AI as fair.
[48] divides explanations into the building blocks "What to explain" (content type), "How to explain" (communication), and "To whom is the explanation addressed" (target group).
Quotes
"Trustworthy AI is about delivering the promise of AI’s benefits while addressing scenarios vital consequences for people and society." - [13]
"Explainability should be considered as a bridge to avoid unfair or unethical use of algorithm outputs." - [6]