Core Concepts
A novel private federated learning framework on Riemannian manifolds (PriRFed) is proposed, which ensures differential privacy and provides convergence guarantees.
Abstract
The paper proposes a private federated learning (FL) framework on Riemannian manifolds, named PriRFed, which aims to safeguard data privacy while solving distributed optimization problems on non-Euclidean spaces.
Key highlights:
PriRFed integrates differential privacy (DP) techniques into the FL setting on Riemannian manifolds, providing privacy guarantees for the learned model.
The framework involves two stages: private local training by agents and server-side aggregation using Riemannian optimization techniques.
For local training, the authors analyze the convergence of PriRFed when agents use either of two differentially private Riemannian optimization algorithms: DP-RSGD (DP Riemannian stochastic gradient descent) and DP-RSVRG (DP Riemannian stochastic variance-reduced gradient).
Theoretical analysis shows that PriRFed satisfies (ε, δ)-differential privacy, and the privacy guarantee does not degrade significantly as the number of iterations grows.
Numerical experiments on synthetic and real-world datasets demonstrate the efficacy of the proposed PriRFed approach.
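The two-stage scheme (private local updates on a manifold, then server-side aggregation) can be sketched in code. The snippet below is a minimal illustration on the unit sphere, not the paper's actual algorithm: the function names, the clipping-plus-Gaussian-noise mechanism in the tangent space, and the first-order tangent-space averaging used for aggregation are all assumptions chosen to convey the general shape of DP-RSGD-style local steps.

```python
import numpy as np

def project_to_tangent(x, v):
    # Tangent space of the unit sphere at x: remove the radial component.
    return v - np.dot(x, v) * x

def retract(x, v):
    # Retraction: step in the tangent direction, then renormalize to the sphere.
    y = x + v
    return y / np.linalg.norm(y)

def dp_rsgd_step(x, riem_grad, lr, clip_norm, sigma, rng):
    # Hypothetical DP local step: clip the Riemannian gradient, add Gaussian
    # noise projected into the tangent space, then retract back to the manifold.
    norm = np.linalg.norm(riem_grad)
    g = riem_grad * min(1.0, clip_norm / max(norm, 1e-12))
    noise = project_to_tangent(x, rng.normal(0.0, sigma * clip_norm, size=x.shape))
    return retract(x, -lr * (g + noise))

def aggregate(points, base):
    # Hypothetical server aggregation: map agents' models to the tangent space
    # at a base point (first-order log-map approximation), average, retract.
    logs = [project_to_tangent(base, p - base) for p in points]
    return retract(base, np.mean(logs, axis=0))
```

For example, after each agent runs a few `dp_rsgd_step` updates on its local data, the server would call `aggregate` on the resulting points to form the next global iterate; the key property illustrated is that both noise injection and averaging happen in tangent spaces, so every iterate stays on the manifold.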
Stats
This summary does not report specific numerical results; the paper's emphasis is on the theoretical analysis (privacy and convergence guarantees) of the proposed private federated learning framework.
Quotes
The paper does not contain any striking quotes that support the key arguments.