Graph Neural Networks (GNNs) have shown remarkable performance in analyzing graph-structured data. This paper introduces DPAR, a novel approach for achieving node-level differential privacy in GNN training. By decoupling feature aggregation from message passing, DPAR improves the privacy-utility trade-off over existing methods. The proposed algorithms outperform state-of-the-art techniques such as GAP and SAGE in test accuracy under the same privacy budget across several datasets. The study highlights the importance of balancing privacy protection for both node features and graph structure in GNN training.
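The decoupled design can be illustrated with a generic two-stage sketch: precompute clipped, noised neighborhood aggregates once before training, then fit an ordinary node-wise classifier with DP-SGD-style clipped and noised gradient updates. This is only a minimal stand-in under stated assumptions, not DPAR's actual algorithm or privacy accounting: the propagation matrix, clipping bounds, and noise scales (clip_C, sigma_agg, sigma_grad) are illustrative choices introduced here.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in graph: N nodes, d features, random symmetric adjacency.
N, d, n_classes = 200, 16, 4
X = torch.randn(N, d)
y = torch.randint(0, n_classes, (N,))
A = (torch.rand(N, N) < 0.05).float()
A = ((A + A.T) > 0).float()          # symmetrize
A.fill_diagonal_(1.0)                # self-loops

# --- Stage 1: decoupled feature aggregation, done once before training ---
# Bound each node's contribution (feature-norm clipping), row-normalize the
# adjacency into a propagation matrix, then add Gaussian noise to the
# aggregated features. Noise placement and scales here are assumptions.
clip_C, sigma_agg = 1.0, 0.5
X_clip = X * torch.clamp(clip_C / X.norm(dim=1, keepdim=True), max=1.0)
P = A / A.sum(dim=1, keepdim=True)   # row-stochastic propagation weights
H = P @ X_clip + sigma_agg * clip_C * torch.randn(N, d)

# --- Stage 2: node-wise classifier trained with DP-SGD-style updates ---
# With aggregation precomputed, each node is an independent training example,
# so per-example gradient clipping bounds one node's influence on the model.
model = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, n_classes))
loss_fn = nn.CrossEntropyLoss()
lr, grad_clip, sigma_grad, batch = 0.1, 1.0, 1.0, 32

for step in range(200):
    idx = torch.randint(0, N, (batch,)).tolist()
    grad_sum = [torch.zeros_like(p) for p in model.parameters()]
    for i in idx:                     # naive per-example gradients
        model.zero_grad()
        loss_fn(model(H[i:i + 1]), y[i:i + 1]).backward()
        g_norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in model.parameters()))
        scale = torch.clamp(grad_clip / (g_norm + 1e-6), max=1.0)
        for acc, p in zip(grad_sum, model.parameters()):
            acc += p.grad * scale     # clipped per-example gradient
    with torch.no_grad():
        for acc, p in zip(grad_sum, model.parameters()):
            noisy = acc + sigma_grad * grad_clip * torch.randn_like(acc)
            p -= lr * noisy / batch   # noisy averaged update

with torch.no_grad():
    print("train accuracy:", (model(H).argmax(1) == y).float().mean().item())
```

Because the aggregation is fixed up front, a node's training example no longer pulls in its neighbors' features during gradient computation, which is what lets per-example clipping bound a single node's influence and underlies the improved privacy-utility trade-off described above.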
Key ideas extracted from the source content by Qiuchen Zhan... at arxiv.org, 03-15-2024
https://arxiv.org/pdf/2210.04442.pdf