This survey provides a comprehensive overview of the research on Comprehensible Artificial Intelligence (CAI) on Knowledge Graphs (KGs). It starts by introducing the concepts of Interpretable Machine Learning (IML) and Explainable Artificial Intelligence (XAI), and defines CAI as the overarching term that encompasses both.
The survey then presents a taxonomy for CAI on KGs, covering the representation of KGs (symbolic, sub-symbolic, neuro-symbolic), the tasks (link prediction, node/graph clustering, recommendation), the foundational methods (translational learning, neural networks, rule-based learning), and the type of comprehensibility (IML, XAI).
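Translational learning, one of the foundational methods in this taxonomy, can be illustrated with a minimal TransE-style scorer for link prediction. The entities, relation, and embedding dimension below are illustrative assumptions, not taken from the survey:

```python
import math
import random

random.seed(0)
dim = 8

def rand_vec():
    """Random embedding vector; a real model would learn these by training."""
    return [random.gauss(0, 1) for _ in range(dim)]

# Toy entity and relation embeddings (illustrative assumption).
entities = {e: rand_vec() for e in ["Berlin", "Germany", "Paris", "France"]}
relations = {"capital_of": rand_vec()}

def score(head, relation, tail):
    """TransE-style score: lower ||h + r - t|| means a more plausible triple."""
    h, r, t = entities[head], relations[relation], entities[tail]
    return math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

# Link prediction as ranking: answer (Berlin, capital_of, ?) by sorting
# candidate tails by translational distance.
candidates = ["Germany", "France"]
ranked = sorted(candidates, key=lambda t: score("Berlin", "capital_of", t))
print(ranked[0])
```

With trained embeddings the lowest-distance candidate would be the predicted tail entity; here the untrained random vectors only demonstrate the scoring mechanism.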
The section on IML on KGs discusses three main lines of research: rule-mining methods, pathfinding methods, and embedding methods. These methods aim to create inherently interpretable AI models for tasks on KGs.
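The rule-mining line can be sketched as applying a mined Horn rule directly as a link predictor: each predicted triple carries the rule itself as a human-readable explanation, which is what makes the model inherently interpretable. The toy KG and the rule below are illustrative assumptions, not examples from the survey:

```python
# Toy KG as a set of (head, relation, tail) triples (illustrative assumption).
kg = {
    ("Alice", "worksAt", "AcmeCorp"),
    ("AcmeCorp", "locatedIn", "Berlin"),
    ("Bob", "worksAt", "AcmeCorp"),
}

def apply_rule(kg):
    """Apply the Horn rule: worksAt(X, C) AND locatedIn(C, L) => livesIn(X, L).

    The rule body doubles as the explanation for every predicted triple.
    """
    predictions = set()
    for (x, p1, c) in kg:
        if p1 != "worksAt":
            continue
        for (c2, p2, l) in kg:
            if p2 == "locatedIn" and c2 == c:
                predictions.add((x, "livesIn", l))
    return predictions

print(sorted(apply_rule(kg)))
# [('Alice', 'livesIn', 'Berlin'), ('Bob', 'livesIn', 'Berlin')]
```

Rule-mining systems search for such rules automatically and weight them by confidence measures over the KG; the sketch shows only the inference step.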
The XAI on KGs section covers four lines of research: rule-based learning methods, decomposition methods, surrogate methods, and graph generation methods. These methods focus on explaining the outputs of black-box AI models using the structure and semantics of KGs.
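Of these lines, the surrogate idea is perhaps the simplest to sketch: fit a transparent model that mimics the black box and measure how faithfully it does so. The black-box stand-in, the surrogate rule, and the queries below are all illustrative assumptions:

```python
# Toy KG of known facts (illustrative assumption).
kg = {("Berlin", "capital_of", "Germany"), ("Paris", "capital_of", "France")}

def black_box(h, r, t):
    """Stand-in for an opaque embedding model's plausibility score."""
    return 0.9 if (h, r, t) in kg else 0.1

def surrogate(h, r, t):
    """Transparent surrogate rule: predict plausible iff the triple is in the KG."""
    return (h, r, t) in kg

# Fidelity: fraction of queries on which the surrogate agrees with the black box.
queries = [("Berlin", "capital_of", "Germany"), ("Berlin", "capital_of", "France")]
fidelity = sum(surrogate(*q) == (black_box(*q) > 0.5) for q in queries) / len(queries)
print(fidelity)  # 1.0
```

A high-fidelity surrogate lets its readable decision logic stand in as an explanation for the black box's behavior on those queries; real surrogate methods fit richer interpretable models over KG structure and semantics.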
The survey provides a detailed overview of the key methods and approaches in each line of research, highlighting their strengths, limitations, and research challenges. It concludes by identifying future research directions in the field of CAI on KGs.
Source: by Simon Schram... at arxiv.org, 04-05-2024, https://arxiv.org/pdf/2404.03499.pdf