Core Concepts
Novel approach using Huber loss minimization for robust federated learning.
Summary
The paper introduces a novel approach to Byzantine-robust federated learning based on Huber loss minimization. It discusses the challenges faced by federated learning systems, the importance of defense strategies against Byzantine attacks, and compares several existing methods. The proposed method aggregates gradients by minimizing a multi-dimensional Huber loss, supported by theoretical analysis and implementation details. Experiments on synthesized and real data demonstrate the effectiveness of the approach under different attack strategies and data distributions.
Introduction
Discusses the rise of Federated Learning (FL) due to privacy concerns.
Highlights challenges faced by FL, particularly in terms of robustness against adversarial attacks.
Existing Methods
Mentions various gradient aggregators, including Krum, geometric median-of-means, coordinate-wise median, and coordinate-wise trimmed mean.
Critically evaluates their performance under different scenarios.
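The coordinate-wise aggregators named above admit compact sketches. The following is illustrative only (function names and the `beta` trimming fraction are our own, not the paper's notation):

```python
import numpy as np

def coordinate_wise_median(grads):
    # grads: (m, d) array, one row per worker gradient.
    # Take the median independently in each coordinate.
    return np.median(grads, axis=0)

def coordinate_wise_trimmed_mean(grads, beta):
    # Per coordinate, discard the beta fraction of smallest and
    # largest values, then average the remainder.
    m = grads.shape[0]
    k = int(beta * m)
    sorted_g = np.sort(grads, axis=0)
    return sorted_g[k:m - k].mean(axis=0)
```

Both are robust to a minority of Byzantine workers per coordinate, which is why the paper uses them as baselines.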
Proposed Method
Introduces a novel approach based on Huber loss minimization for robust federated learning.
Provides theoretical analysis under i.i.d., unbalanced, and heterogeneous data assumptions.
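The multi-dimensional Huber loss at the core of the method can be written in its standard form (the threshold $\delta$ and gradient notation $g_i$ here are illustrative, not necessarily the paper's symbols):

```latex
\ell_\delta(x) =
\begin{cases}
  \tfrac{1}{2}\|x\|^2, & \|x\| \le \delta, \\[4pt]
  \delta\|x\| - \tfrac{1}{2}\delta^2, & \|x\| > \delta,
\end{cases}
\qquad
\hat{z} = \arg\min_{z} \sum_{i=1}^{m} \ell_\delta(g_i - z).
```

Near the bulk of the gradients the loss is quadratic (mean-like efficiency); beyond $\delta$ it grows only linearly, so outlying Byzantine gradients exert bounded influence (median-like robustness).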
Implementation
Describes the algorithm for implementing multi-dimensional Huber loss minimization.
Numerical Experiments
Conducts experiments on synthesized and real data to validate the effectiveness of the proposed method against various attack strategies.
Conclusion
Concludes with future directions for improving robustness in federated learning.
Quotes
"Our method still exhibits desirable performance, even under HLMA designed specifically for ourselves."
"Krum is still highly susceptible to KA."
"CWM appears to be only slightly worse than our method."