Core Concepts
Backdoor attacks on Graph Neural Networks can be detected effectively using novel metrics derived from GNN explainers.
Abstract
Graph Neural Networks (GNNs) are vulnerable to backdoor attacks, and such attacks are difficult to detect. This work proposes a detection strategy built on seven new metrics derived from GNN explainers to improve backdoor detection. The method is evaluated across multiple datasets and attack models, marking a significant advance in safeguarding GNNs against backdoor attacks.
Directory:
- Introduction
- GNNs' significance in graph data learning.
- Backdoor attacks' threat to GNNs.
- Limitations of GNN Explainers for Backdoor Detection
- Inconsistencies in using GNN explainers for backdoor detection.
- Proposal for a multi-faceted approach using novel metrics.
- Proposed Metrics
- Metrics leveraging different aspects of the explanation process.
- Explanation of Prediction Confidence, Explainability, Connectivity, SNDV, NDV, Elbow, and Curvature.
- Detection Strategy
- Clean validation thresholding for backdoor detection.
- Composite metric for backdoor prediction.
- Experiments
- Impact of NPMR on F1 score.
- Effectiveness against adaptive attacks.
- Conclusion
- Summary of the research findings and proposed detection strategy.
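Among the proposed metrics, Elbow and Curvature suggest analysis of a curve produced during the explanation process (the paper's exact definition is not given here). As a generic, hedged illustration, discrete curvature of a sampled curve and a simple elbow locator can be sketched as follows; the function names and the unit x-spacing are assumptions:

```python
import numpy as np

def discrete_curvature(y: np.ndarray) -> np.ndarray:
    """Curvature kappa = |y''| / (1 + y'^2)^1.5 of a sampled curve y,
    assuming unit spacing on the x-axis."""
    dy = np.gradient(y)
    d2y = np.gradient(dy)
    return np.abs(d2y) / (1.0 + dy**2) ** 1.5

def elbow_index(y: np.ndarray) -> int:
    """Index of maximum curvature -- a simple 'elbow' locator."""
    return int(np.argmax(discrete_curvature(y)))
```

For example, on a curve that rises linearly and then flattens, `elbow_index` returns the sample where the bend is sharpest.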
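The clean-validation thresholding in the Detection Strategy can be sketched generically: calibrate a per-metric threshold on metrics computed over clean validation graphs, then flag a sample when enough metrics exceed their thresholds. This is a minimal sketch under assumed names; the paper's exact seven metrics, quantile, and composite aggregation rule are not specified here:

```python
import numpy as np

def calibrate_thresholds(clean_metrics: np.ndarray, quantile: float = 0.95) -> np.ndarray:
    """Per-metric thresholds from a clean validation set.

    clean_metrics: shape (n_samples, n_metrics), one row per validation graph.
    Returns one threshold per metric (the chosen quantile of clean values).
    """
    return np.quantile(clean_metrics, quantile, axis=0)

def composite_score(sample_metrics: np.ndarray, thresholds: np.ndarray) -> int:
    """Count how many metrics exceed their clean-validation threshold."""
    return int(np.sum(sample_metrics > thresholds))

def is_backdoored(sample_metrics: np.ndarray, thresholds: np.ndarray,
                  min_votes: int = 3) -> bool:
    """Flag a sample when at least min_votes metrics look anomalous
    (min_votes is an illustrative choice, not the paper's rule)."""
    return composite_score(sample_metrics, thresholds) >= min_votes
```

Combining several metrics this way is what makes the detector harder to evade: an adaptive attack that suppresses one metric still has to beat the remaining votes.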
Stats
Our method achieves an F1 score of up to 0.906 for detecting randomly-generated triggers and 0.842 for adaptively-generated triggers.
The proposed adaptive attack aims to evade GNN explainers and detection methods.
Quotes
"Our method can achieve high detection performance, marking a significant advancement in safeguarding GNNs against backdoor attacks."
"The composite metric still performs reasonably well in the adaptive case, suggesting that our detection method is robust against attacks on individual metrics."