Core Concepts
BOBA provides unbiased and robust gradient aggregation for federated learning under label skewness, i.e., when clients hold data with differing label distributions.
Abstract
The paper analyzes the challenges that label skewness poses for Byzantine-robust federated learning and introduces BOBA as a solution. It covers the theoretical analysis, the two stages of the algorithm, its computational complexity, and experimental evaluations across multiple datasets and attacks.
Introduction
Federated learning (FL) system overview.
Challenges of Byzantine attacks in FL.
Label Skewness Analysis
Definition of label skew distribution.
Honest gradients distribution analysis.
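The paper's key structural observation can be summarized as follows (notation here is paraphrased, not the paper's exact symbols): with c classes, an honest client's gradient is approximately a mixture of per-class expected gradients, weighted by that client's label distribution:

```latex
% g_i: gradient of honest client i; p_{i,k}: fraction of class k in client i's data
g_i \;\approx\; \sum_{k=1}^{c} p_{i,k}\,\nabla_{\theta}\,
  \mathbb{E}\big[\ell(\theta; x, y)\,\big|\, y = k\big],
\qquad \sum_{k=1}^{c} p_{i,k} = 1,\quad p_{i,k} \ge 0 .
```

Hence honest gradients concentrate near a (c-1)-dimensional simplex whose vertices are the per-class gradients, which is the structure BOBA exploits.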
Challenges of Label Skewness
Selection bias and increased vulnerability explained.
Proposed BOBA Algorithm
Two-stage method: first fitting the honest subspace, then locating the honest simplex within it.
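The two stages can be sketched in NumPy as below. This is a minimal illustration under assumptions, not the authors' implementation: the trimmed-PCA loop for stage 1, the server-side per-class gradient estimates `class_grads`, the tolerance `eps`, and all function names are assumptions made for the sketch.

```python
import numpy as np

def fit_honest_subspace(grads, c, n_byz, iters=5):
    # Stage 1 (sketch): alternate between fitting a (c-1)-dim affine
    # subspace by PCA and trimming the n_byz clients with the largest
    # reconstruction error, treated here as suspected Byzantine.
    n = grads.shape[0]
    keep = np.arange(n)
    for _ in range(iters):
        mean = grads[keep].mean(axis=0)
        _, _, vt = np.linalg.svd(grads[keep] - mean, full_matrices=False)
        basis = vt[: c - 1]                       # top (c-1) directions
        resid = (grads - mean) - ((grads - mean) @ basis.T) @ basis
        err = np.linalg.norm(resid, axis=1)
        keep = np.argsort(err)[: n - n_byz]       # drop worst offenders
    return mean, basis

def boba_aggregate(grads, class_grads, c, n_byz, eps=0.2):
    # Stage 2 (sketch): locate the honest simplex using per-class
    # gradient estimates (assumed to come from a small server-side
    # dataset), keep clients whose simplex coordinates look like valid
    # label mixtures, and average them.
    mean, basis = fit_honest_subspace(grads, c, n_byz)
    coords = (grads - mean) @ basis.T             # (n, c-1)
    verts = (class_grads - mean) @ basis.T        # (c, c-1) simplex vertices
    # Solve verts.T @ w = coord with sum(w) = 1 for each client.
    A = np.vstack([verts.T, np.ones((1, c))])     # (c, c)
    b = np.hstack([coords, np.ones((len(grads), 1))])
    W = np.linalg.lstsq(A, b.T, rcond=None)[0].T  # (n, c) mixture weights
    honest = np.all(W >= -eps, axis=1) & np.all(W <= 1 + eps, axis=1)
    return grads[honest].mean(axis=0)
```

A client whose recovered mixture weights fall far outside the valid label-distribution range is rejected; honest clients, whose weights approximate their true label proportions, pass the check.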
Theoretical Analysis
Connection between convergence and gradient estimation error.
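A standard form of this connection (a generic biased-gradient argument, not the paper's exact statement): if the aggregated gradient \(\hat g_t\) deviates from the true average honest gradient \(\bar g_t\) by at most \(\Delta\), then SGD on an L-smooth objective \(F\) converges to a neighborhood whose size is governed by \(\Delta\):

```latex
% Generic biased-SGD bound, assuming \|\hat g_t - \bar g_t\| \le \Delta for all t
\min_{t < T} \; \mathbb{E}\,\|\nabla F(\theta_t)\|^2
\;\le\; \mathcal{O}\!\left(\tfrac{1}{\sqrt{T}}\right) + \mathcal{O}(\Delta^2),
```

so bounding the gradient estimation error of the aggregator directly yields a convergence guarantee.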
Experiments
Evaluation of unbiasedness, robustness, efficiency, effect of server data, hyper-parameters, label skewness settings.
Evaluation Results
Unbiasedness evaluation shows BOBA's superior performance compared to baseline aggregation rules (AGRs).
Robustness evaluation demonstrates BOBA's effectiveness against various attacks.
Key Statements
In this paper, we address label skewness in federated learning.
We propose an efficient two-stage method named BOBA with proven convergence guarantees.
Quotes
"We introduce BOBA to tackle the limitations of existing AGRs."
"BOBA demonstrates superior unbiasedness and robustness across diverse models."