Core Concepts
Achieving node-level differential privacy for training GNNs with an enhanced privacy-utility trade-off.
Abstract
Graph Neural Networks (GNNs) have shown remarkable performance in analyzing graph-structured data. This paper introduces DPAR, a novel approach to achieving node-level differential privacy in GNN training. By decoupling feature aggregation from message passing, DPAR improves the privacy-utility trade-off over existing layer-wise perturbation methods. Under the same privacy budget, the proposed algorithms achieve higher test accuracy than state-of-the-art techniques such as GAP and SAGE across various datasets. The study highlights the importance of balancing privacy protection for both node features and graph structure in GNN training.
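The decoupling idea from the abstract can be illustrated with a minimal sketch: approximate personalized PageRank (PPR) is computed once to aggregate neighbor features, Gaussian noise is added to the aggregated features, and a standard (non-graph) classifier is then trained on the result so no further message passing touches the private graph. Note this is an illustrative approximation only; the function names, the power-iteration PPR approximation, and the noise placement are assumptions, and the paper's actual DP-APPR mechanism and its sensitivity analysis differ in detail.

```python
import numpy as np

def personalized_pagerank(A, alpha=0.15, iters=20):
    # Add self-loops, column-normalize to a transition matrix, and
    # approximate the PPR matrix by power iteration (illustrative only).
    n = A.shape[0]
    A_hat = A + np.eye(n)
    P = A_hat / A_hat.sum(axis=0, keepdims=True)
    ppr = np.eye(n)
    for _ in range(iters):
        ppr = alpha * np.eye(n) + (1 - alpha) * P @ ppr
    return ppr

def dp_aggregate(ppr, X, sigma=0.5, rng=None):
    # Decoupled aggregation: PPR-weighted feature mixing plus Gaussian
    # noise (hypothetical noise placement; the paper calibrates noise
    # via a dedicated sensitivity analysis for node-level DP).
    rng = rng or np.random.default_rng(0)
    agg = ppr @ X
    return agg + rng.normal(0.0, sigma, size=agg.shape)
```

The noisy aggregated features can then be fed to an ordinary MLP classifier, which is the sense in which aggregation is decoupled from the learning step.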
Stats
Privacy budget: ε = 1, Test accuracy: 0.3421 (Cora-ML)
Privacy budget: ε = 1, Test accuracy: 0.8569 (MS Academic)
Privacy budget: ε = 1, Test accuracy: 0.8927 (CS)
Privacy budget: ε = 1, Test accuracy: 0.934 (Reddit)
Privacy budget: ε = 1, Test accuracy: 0.8948 (Physics)
Quotes
"Our framework achieves enhanced privacy-utility trade-off compared to existing layer-wise perturbation based methods."
"We propose a Decoupled GNN with Differentially Private Approximate Personalized PageRank (DPAR) for training GNNs with an enhanced privacy-utility tradeoff."