This article summarizes a benchmarking study of perturbation-based explainability methods for Graph Neural Networks (GNNs). The study evaluates and compares explainability techniques through both factual and counterfactual reasoning. Key findings include the identification of Pareto-optimal methods that offer superior efficacy and stability in the presence of noise, although every evaluated algorithm still suffers some degree of instability on noisy data. The study also highlights a limitation of current counterfactual explainers: the recourses they propose are often infeasible because they violate domain-specific constraints.
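To make the idea of perturbation-based counterfactual explanation concrete, here is a minimal, self-contained sketch. This is not code from the paper; the one-layer toy GCN, the random weights, and the greedy edge-deletion search are all illustrative assumptions. The sketch finds a small set of edge removals that flips the model's prediction for a target node, which is the essence of a counterfactual explanation.

```python
import torch

def gcn_forward(A, X, W):
    # One-layer GCN with symmetric normalization:
    # logits = D^{-1/2} (A + I) D^{-1/2} X W
    A_hat = A + torch.eye(A.size(0))
    d = A_hat.sum(dim=1)
    D_inv_sqrt = torch.diag(d.pow(-0.5))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W

def counterfactual_edges(A, X, W, node, max_removals=5):
    """Greedy perturbation search: repeatedly delete the edge whose removal
    most reduces confidence in the original class, until the prediction
    for `node` flips. Returns the removed edges, or None if no flip occurs
    within the removal budget."""
    A = A.clone()
    orig = gcn_forward(A, X, W)[node].argmax().item()
    removed = []
    for _ in range(max_removals):
        n = A.size(0)
        edges = [(i, j) for i in range(n) for j in range(i + 1, n) if A[i, j] > 0]
        best, best_conf = None, float("inf")
        for i, j in edges:
            saved = A[i, j].item()
            A[i, j] = A[j, i] = 0.0          # tentatively perturb the graph
            logits = gcn_forward(A, X, W)[node]
            conf = torch.softmax(logits, dim=0)[orig].item()
            if conf < best_conf:
                best, best_conf = (i, j), conf
            A[i, j] = A[j, i] = saved        # restore before trying the next edge
        if best is None:
            break
        i, j = best
        A[i, j] = A[j, i] = 0.0              # commit the most damaging deletion
        removed.append(best)
        if gcn_forward(A, X, W)[node].argmax().item() != orig:
            return removed                   # counterfactual found: prediction flipped
    return None

# Toy usage: a 4-node graph with random features and weights.
torch.manual_seed(0)
A = torch.tensor([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=torch.float)
X = torch.randn(4, 3)
W = torch.randn(3, 2)
print(counterfactual_edges(A, X, W, node=2))
```

In a real benchmark the trained model would replace the random toy weights, and, per the feasibility issue the study raises, a domain check would reject counterfactuals whose edge removals violate domain constraints (e.g., deleting a chemically required bond in a molecule).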
Key insights distilled from: Mert Kosan, S... et al., arxiv.org, 03-15-2024, https://arxiv.org/pdf/2310.01794.pdf