Core Concepts
An invariant aggregator defends against backdoor attacks in federated learning by redirecting the aggregated update to invariant directions, i.e., dimensions on which most clients agree.
Abstract
Federated learning allows training models without sharing private data directly.
Backdoor attacks in federated learning let a small set of malicious clients control model predictions whenever an attacker-chosen trigger appears in the input.
Existing defenses may fail over flat loss landscapes, which are common in modern over-parameterized models, because malicious updates there need not look anomalous in magnitude.
The proposed invariant aggregator redirects the aggregated update to invariant directions, i.e., dimensions where most clients agree on the sign of the update, and thereby mitigates backdoor attacks.
The defense combines an AND-mask, which zeroes dimensions with insufficient sign agreement across clients, with a trimmed-mean estimator that discards extreme values from a possibly malicious minority; a code sketch follows the abstract.
Theoretical and empirical results show the effectiveness of the defense.
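As a rough illustration of how the two components fit together, here is a minimal sketch of such an aggregator in NumPy. The function name invariant_aggregate and the hyperparameters agreement_threshold and trim_ratio are hypothetical; the paper's actual algorithm, thresholds, and trimming rule may differ.

```python
import numpy as np

def invariant_aggregate(client_updates, agreement_threshold=0.8, trim_ratio=0.1):
    """Aggregate per-client updates along invariant directions.

    client_updates: array of shape (num_clients, num_params).
    agreement_threshold: minimum fraction of clients that must agree on the
        sign of a dimension for it to be kept (AND-mask).
    trim_ratio: fraction of values trimmed from each end per dimension
        before averaging (trimmed mean).
    """
    updates = np.asarray(client_updates)
    n = updates.shape[0]

    # AND-mask: keep only dimensions where the sign of the update is
    # (near-)invariant across clients; a few malicious clients cannot
    # flip the sign agreement of the honest majority.
    agreement = np.abs(np.mean(np.sign(updates), axis=0))
    mask = (agreement >= agreement_threshold).astype(updates.dtype)

    # Trimmed mean: per dimension, drop the k largest and k smallest
    # values so outlying malicious magnitudes cannot dominate.
    k = int(trim_ratio * n)
    sorted_updates = np.sort(updates, axis=0)
    trimmed = sorted_updates[k:n - k] if k > 0 else sorted_updates
    robust_mean = trimmed.mean(axis=0)

    # The aggregated update only moves along invariant directions.
    return mask * robust_mean

# Toy run: 9 honest clients agree on the first dimension; 1 malicious
# client pushes a large update in a dimension the others barely use.
rng = np.random.default_rng(0)
honest = rng.normal(loc=[1.0, 0.0], scale=0.1, size=(9, 2))
malicious = np.array([[1.0, 10.0]])  # second dimension carries the backdoor
print(invariant_aggregate(np.vstack([honest, malicious])))
```

In this toy run the honest majority agrees on the first dimension, so it survives both the mask and the trimming, while the single malicious client's large push in the second dimension fails the sign-agreement test and is masked out.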
Stats
"Our approach decreases the ASR by 61.6% on average."
"On average, our approach decreases the backdoor attack success rate by 61.6%."
Quotes
"Our approach decreases the ASR by 61.6% on average."