Graph Neural Networks (GNNs) have shown remarkable performance in analyzing graph-structured data. This paper introduces DPAR, a novel approach for achieving node-level differential privacy in GNN training. By decoupling feature aggregation and message passing, DPAR improves the privacy-utility trade-off over existing methods. Under the same privacy budget, the proposed algorithms achieve higher test accuracy than state-of-the-art techniques such as GAP and SAGE across multiple datasets. The study highlights the importance of balancing privacy protection for both node features and the graph structure in GNN training.
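The decoupled design can be pictured as a two-stage pipeline: aggregate node features over the graph once, up front, and then train a per-node model on the pre-aggregated features with DP-SGD-style noisy, clipped gradients. Below is a minimal sketch of that idea, not the authors' implementation; the names (aggregate_features, NodeMLP, clip_norm, sigma) and the dense-adjacency multi-hop averaging are illustrative assumptions, and the sketch omits the additional privacy protection that the paper applies to the graph structure during aggregation.

```python
# Minimal sketch (assumptions, not the DPAR code): decoupled aggregation + DP-SGD training.
import torch
import torch.nn as nn
import torch.nn.functional as F


def aggregate_features(adj: torch.Tensor, x: torch.Tensor, hops: int = 2) -> torch.Tensor:
    """Row-normalized multi-hop mean aggregation, computed once before training."""
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    p = adj / deg                       # row-stochastic propagation matrix
    h = x
    for _ in range(hops):
        h = p @ h                       # neighborhood averaging
    return h


class NodeMLP(nn.Module):
    """Per-node classifier applied to the pre-aggregated features."""
    def __init__(self, in_dim: int, hidden: int, classes: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, classes))

    def forward(self, h):
        return self.net(h)


def dp_sgd_step(model, opt, h, y, clip_norm=1.0, sigma=1.0):
    """One DP-SGD step: per-example gradient clipping plus Gaussian noise."""
    opt.zero_grad()
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for i in range(h.shape[0]):                         # per-example gradients
        loss = F.cross_entropy(model(h[i:i + 1]), y[i:i + 1])
        grads = torch.autograd.grad(loss, list(model.parameters()))
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (norm + 1e-6)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s += g * scale                              # clip each example's gradient
    for p, s in zip(model.parameters(), summed):
        noise = torch.randn_like(s) * sigma * clip_norm
        p.grad = (s + noise) / h.shape[0]               # noisy averaged gradient
    opt.step()


# Toy usage (random data, shapes only):
n, d, c = 8, 16, 3
adj = (torch.rand(n, n) < 0.3).float()
x, y = torch.randn(n, d), torch.randint(0, c, (n,))
model = NodeMLP(d, 32, c)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
h = aggregate_features(adj, x)
dp_sgd_step(model, opt, h, y)
```

The design intuition behind decoupling is that the privacy-sensitive access to the graph happens once in the aggregation stage rather than inside every layer of every training step, which is one way such methods can improve the privacy-utility trade-off.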
Key insights extracted from content by Qiuchen Zhan... at arxiv.org, 03-15-2024
https://arxiv.org/pdf/2210.04442.pdf