This summary covers a benchmarking study of perturbation-based explainability methods for Graph Neural Networks (GNNs). The study evaluates and compares explainability techniques along both factual and counterfactual reasoning. Among its key findings, it identifies Pareto-optimal methods with superior efficacy and stability in the presence of noise, although every algorithm still exhibits stability issues on noisy data. The study also highlights a limitation of current counterfactual explainers: the recourses they propose are often infeasible because they violate domain-specific constraints.
by Mert Kosan, S... at arxiv.org, 03-15-2024
https://arxiv.org/pdf/2310.01794.pdf
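To make the factual versus counterfactual evaluation mentioned above concrete, here is a minimal sketch, not the paper's benchmark code, of how an edge-mask explanation for a graph classifier might be scored both ways. The model, the 0.5 mask threshold, and the toy classifier are all assumptions introduced for illustration.

```python
# Hypothetical sketch: factual (sufficiency) vs. counterfactual (necessity)
# scoring of an edge-mask explanation. Not the paper's benchmark code.
import torch


def evaluate_explanation(model, x, edge_index, edge_mask, target, threshold=0.5):
    """Return (factual_sufficiency, counterfactual_necessity) for one graph."""
    keep = edge_mask > threshold                     # edges the explainer selected

    # Factual view: feed only the explanation edges; does the prediction survive?
    fact_pred = model(x, edge_index[:, keep]).argmax(-1)
    sufficiency = (fact_pred == target).float().mean().item()

    # Counterfactual view: delete the explanation edges; does the prediction flip?
    cf_pred = model(x, edge_index[:, ~keep]).argmax(-1)
    necessity = (cf_pred != target).float().mean().item()
    return sufficiency, necessity


class ToyGraphClassifier(torch.nn.Module):
    """Toy stand-in model so the sketch runs end to end: sums neighbor features
    and scores the graph with a single linear readout."""

    def __init__(self, in_dim=4, n_classes=2):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, n_classes)

    def forward(self, x, edge_index):
        src, dst = edge_index
        agg = torch.zeros_like(x).index_add_(0, dst, x[src])  # message passing
        return self.lin(agg.mean(0, keepdim=True))             # graph-level logits


if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(5, 4)                                # 5 nodes, 4 features
    edge_index = torch.tensor([[0, 1, 2, 3],
                               [1, 2, 3, 4]])            # 4 directed edges
    edge_mask = torch.tensor([0.9, 0.8, 0.1, 0.2])       # explainer's edge scores
    model = ToyGraphClassifier()
    target = model(x, edge_index).argmax(-1)             # original prediction
    print(evaluate_explanation(model, x, edge_index, edge_mask, target))
```

In this framing, a factual explanation should be sufficient (keeping only the selected edges preserves the prediction), while a counterfactual explanation should be necessary (removing them flips it); stability under noise can then be probed by perturbing the node features or graph structure and repeating the same evaluation.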