Core Concepts
The authors propose two novel privacy-preserving federated primal-dual learning algorithms, DP-FedPDM and BSDP-FedPDM, to efficiently solve non-convex and non-smooth federated learning problems while considering communication efficiency and privacy protection.
Abstract
The paper presents two novel privacy-preserving federated primal-dual learning algorithms:
- DP-FedPDM: A fundamental federated primal-dual algorithm that applies differential privacy to protect client privacy.
  - Comprehensive privacy and convergence analyses are provided, showing that DP-FedPDM achieves a ζ-stationary solution in O(1/ζ) communication rounds, the lowest known complexity for non-convex problems.
- BSDP-FedPDM: An extension of DP-FedPDM that incorporates bidirectional model sparsification to further reduce communication cost.
  - The algorithm applies top-k and rand-k sparsifiers in both uplink and downlink communication.
  - Several unique properties of BSDP-FedPDM are discussed, including mitigation of the "curse of primal averaging" and the tradeoff between communication-cost reduction and learning performance.
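The summary states that DP-FedPDM applies differential privacy to client updates but does not specify the mechanism. A common construction in DP federated learning, and a plausible reading here, is to clip each client's update to a bounded L2 norm and add Gaussian noise before transmission. The following is an illustrative sketch only; the function name, clipping rule, and noise scale are assumptions, not the paper's exact procedure.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_std=0.1, rng=None):
    """Illustrative DP step (assumed, not the paper's exact mechanism):
    clip the update to L2 norm <= clip_norm, then add Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(update)
    # Scale down only if the update exceeds the clipping threshold.
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    # Gaussian mechanism: noise_std is calibrated to the privacy budget.
    return clipped + rng.normal(0.0, noise_std, size=update.shape)
```

In practice `noise_std` would be derived from the clipping norm and the target (ε, δ) privacy budget; here it is left as a free parameter.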
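The top-k and rand-k sparsifiers mentioned for BSDP-FedPDM have standard definitions: top-k keeps the k largest-magnitude coordinates of a vector, while rand-k keeps k uniformly random coordinates, often rescaled by d/k so the operator is unbiased in expectation. A minimal sketch under those standard definitions (the rescaling convention is an assumption; the paper may use a different one):

```python
import numpy as np

def top_k(x, k):
    """Keep the k largest-magnitude entries of x, zero the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def rand_k(x, k, rng=None):
    """Keep k uniformly random entries, scaled by d/k for unbiasedness."""
    rng = np.random.default_rng() if rng is None else rng
    out = np.zeros_like(x)
    idx = rng.choice(x.size, size=k, replace=False)
    out[idx] = x[idx] * (x.size / k)
    return out
```

Top-k is a biased but typically more accurate compressor; rand-k is unbiased but higher variance, which is one reason combining them in uplink and downlink involves the cost/performance tradeoff the paper discusses.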
Extensive experiments on real-world datasets (Adult and MNIST) validate the effectiveness of the proposed algorithms and the analytical results. The results demonstrate superior performance over state-of-the-art federated learning algorithms in terms of communication efficiency, privacy protection, and learning performance.
Statistics
The paper does not report specific numerical statistics for its key claims; the experimental results are presented as plots of testing accuracy and communication cost.