The article discusses the benefits of combining federated learning (FL) and differential privacy (DP) to enable privacy-preserving large-scale machine learning.
FL allows multiple remote clients to collaboratively train a machine learning model without exposing their raw data, by sharing only model updates. However, FL alone does not provide formal privacy guarantees, as the model updates can still leak private information. DP, on the other hand, provides a rigorous mathematical framework to limit the privacy leakage from the model outputs.
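The FL training loop described above can be sketched in a few lines. This is a minimal toy illustration (not the paper's own algorithm): each client runs one gradient step on a synthetic least-squares problem and ships only its updated weights to a server, which averages them in the style of FedAvg. All function names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def client_update(global_weights, local_data, lr=0.1):
    """One local gradient step on a least-squares objective.

    Only the resulting weights -- never the raw (X, y) data --
    leave the client. (Toy sketch; lr is an assumed hyperparameter.)
    """
    X, y = local_data
    grad = 2 * X.T @ (X @ global_weights - y) / len(y)
    return global_weights - lr * grad

def fedavg_round(global_weights, clients):
    """Server averages the clients' locally computed models (FedAvg-style)."""
    local_models = [client_update(global_weights, data) for data in clients]
    return np.mean(local_models, axis=0)

# Synthetic data: 5 clients, each holding 20 private samples.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for _ in range(5):
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(200):
    w = fedavg_round(w, clients)
# w converges toward w_true without any client sharing raw data.
```

As the article notes, however, the exchanged updates themselves can still leak information about the underlying data, which is what motivates adding DP.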
The article first introduces the key concepts of FL and DP, and highlights how their combination can address the conflict between data-hungry machine learning and growing privacy concerns. It then reviews the current research advances in integrating DP into FL, categorizing the different paradigms and notions, such as centralized DP, local DP, and distributed DP.
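To make the local-DP notion above concrete, here is a sketch of classic randomized response, in which each client perturbs its own one-bit report before sending it, so the server never sees a trustworthy individual value but can still debias the aggregate. This is a standard illustrative mechanism, not one proposed by the article; the function names and the epsilon value are assumptions.

```python
import math
import random

def randomized_response(bit, epsilon):
    """Local DP: the client reports its true bit with probability
    e^eps / (e^eps + 1), and the flipped bit otherwise."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return bit if random.random() < p else 1 - bit

def debias(reports, epsilon):
    """Server-side correction for the known flip probability,
    recovering an unbiased estimate of the true fraction of ones."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(reports) / len(reports)
    return (observed + p - 1) / (2 * p - 1)

random.seed(1)
true_bits = [1] * 700 + [0] * 300   # true fraction of ones: 0.7
eps = 2.0
reports = [randomized_response(b, eps) for b in true_bits]
estimate = debias(reports, eps)     # close to 0.7, despite per-client noise
```

Centralized DP would instead trust the server to add noise once after aggregation, trading a stronger trust assumption for better accuracy; distributed DP sits in between, typically combining per-client noise with secure aggregation.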
To achieve usable FL with DP, the article presents high-level optimization principles from both the DP and the FL perspectives. DP-focused optimizations include improving gradient clipping, the noise distribution, and privacy loss composition. FL-focused optimizations exploit characteristics of the FL setting, such as the massive number of clients and the sparsity of model parameters, through techniques like reducing update frequency, compressing model parameters, and sampling participating clients.
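The clipping-and-noising step at the heart of the DP-focused optimizations can be sketched as follows. This is a simplified illustration in the spirit of DP-SGD-style aggregation, not the article's specific method: each client update is clipped to a bounded L2 norm (so one client's contribution has bounded sensitivity), the clipped updates are summed, and calibrated Gaussian noise is added. The function name and the `clip_norm`/`noise_multiplier` knobs are illustrative assumptions.

```python
import numpy as np

def dp_aggregate(updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip each client update to L2 norm <= clip_norm, sum, then add
    Gaussian noise with std noise_multiplier * clip_norm (the
    per-client sensitivity bound). Toy sketch, not a production API."""
    rng = rng or np.random.default_rng()
    clipped = []
    for u in updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(updates)

# 100 synthetic client updates centered at 0.5.
rng = np.random.default_rng(42)
updates = [rng.normal(0.5, 0.1, size=4) for _ in range(100)]
private_mean = dp_aggregate(updates, clip_norm=2.0,
                            noise_multiplier=1.0, rng=rng)
```

Because the noise is fixed by the clipping bound while the signal grows with the number of summed updates, averaging over many clients keeps the noisy estimate accurate, which is exactly why the FL-focused optimizations (client sampling, reduced update frequency) interact so directly with the DP budget.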
Finally, the article discusses future challenges in applying FL with DP to emerging areas like vertical/transfer federation, large language models, and streaming data, as well as considerations around robustness, fairness, and the "right to be forgotten".