Graph Neural Networks (GNNs) have shown remarkable performance on graph-structured data. This paper introduces DPAR, a novel approach for training GNNs with node-level differential privacy. By decoupling feature aggregation from message passing, DPAR achieves a better privacy-utility trade-off than existing methods: under the same privacy budget, the proposed algorithms reach higher test accuracy than state-of-the-art techniques such as GAP and SAGE across several datasets. The study highlights the importance of balancing privacy protection for both node features and graph structure in GNN training.
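To make the decoupling idea concrete, the sketch below shows one plausible two-stage pattern under stated assumptions: a one-shot noisy neighbor aggregation that is the only step touching the graph structure, followed by training an ordinary per-node classifier to which per-example (DP-SGD-style) clipping and noising could be applied. The function and class names (`noisy_aggregate`, `NodeMLP`), noise scales, and training loop are illustrative assumptions, not the authors' actual DPAR algorithm.

```python
import torch
import torch.nn as nn

# Illustrative two-stage "decoupled" pipeline (assumed, not the paper's code):
# 1) aggregate neighbor features once, with Gaussian noise to protect the
#    graph structure; 2) train a plain per-node model on the noisy aggregates,
#    where per-node gradient clipping/noising (DP-SGD style) could be applied.

def noisy_aggregate(x, adj, sigma=1.0, clip=1.0):
    """Mean neighbor aggregation with Gaussian noise (hypothetical sketch).

    x:   [N, d] node feature matrix
    adj: [N, N] dense 0/1 adjacency matrix
    """
    # Clip feature norms so each node's contribution to any neighbor's
    # aggregate is bounded, which is what calibrates the added noise.
    norms = x.norm(dim=1, keepdim=True).clamp(min=1e-12)
    x_clipped = x * (clip / norms).clamp(max=1.0)

    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    agg = adj @ x_clipped / deg                    # mean over neighbors
    return agg + torch.randn_like(agg) * sigma * clip / deg


class NodeMLP(nn.Module):
    """Per-node classifier trained on the precomputed noisy aggregates."""
    def __init__(self, d_in, d_hidden, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Linear(d_hidden, n_classes)
        )

    def forward(self, h):
        return self.net(h)


if __name__ == "__main__":
    N, d, c = 100, 16, 4
    x = torch.randn(N, d)
    adj = (torch.rand(N, N) < 0.05).float()
    y = torch.randint(0, c, (N,))

    h = noisy_aggregate(x, adj)        # structure-dependent step, done once
    model = NodeMLP(d, 32, c)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(20):                # node-wise training; DP-SGD would add
        opt.zero_grad()                # per-example clipping and noise here
        loss = loss_fn(model(h), y)
        loss.backward()
        opt.step()
```

Because the learnable part never re-reads the adjacency matrix, the privacy cost of the graph structure is paid once at aggregation time, which is the intuition behind the improved privacy-utility trade-off described above.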
Key insights drawn from the paper by Qiuchen Zhan... at arxiv.org, 03-15-2024
https://arxiv.org/pdf/2210.04442.pdf