The paper introduces a novel defense framework to address the threat of data poisoning attacks in Federated Learning (FL) environments. The key idea is to leverage the training loss reported by each participating user, combined with Differential Privacy techniques, to detect and eliminate malicious users from the aggregation process.
The authors first conduct experiments to analyze the impact of data poisoning attacks on FL models, using the MNIST and CIFAR-10 datasets. They observe that while standard metrics like overall accuracy and loss do not clearly indicate the presence of malicious users, the recall of the class under attack drops sharply.
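The intuition behind this observation can be illustrated with a minimal sketch (not the authors' code): under a label-flipping attack on a single class, overall accuracy barely moves, but that class's recall collapses. The example labels below are invented for illustration.

```python
import numpy as np

def per_class_recall(y_true, y_pred, num_classes):
    """Recall for each class: TP / (TP + FN)."""
    recalls = []
    for c in range(num_classes):
        mask = (y_true == c)
        tp = np.sum((y_pred == c) & mask)
        recalls.append(tp / max(mask.sum(), 1))
    return np.array(recalls)

# Toy example: an attack flips half the class-1 labels to class 7.
y_true = np.array([1, 1, 1, 1, 0, 0, 7, 7])
y_pred = np.array([7, 7, 1, 1, 0, 0, 7, 7])

acc = (y_true == y_pred).mean()              # 0.75 -- still looks fine
rec = per_class_recall(y_true, y_pred, 10)   # rec[1] == 0.5 -- clear signal
```

Monitoring per-class recall rather than aggregate accuracy is what exposes targeted poisoning here.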
Building on these insights, the authors propose a defense mechanism with two steps:
During the local training phase, each user adds random noise to their reported training loss using the Laplace mechanism of Local Differential Privacy. This preserves user privacy while allowing the server to detect anomalies.
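This local step can be sketched as follows. The function name and the sensitivity/epsilon values are illustrative choices, not the paper's exact parameters; only the use of the Laplace mechanism is taken from the source.

```python
import numpy as np

def report_noisy_loss(true_loss, sensitivity=1.0, epsilon=1.0, rng=None):
    """Laplace mechanism of Local Differential Privacy: perturb the
    locally computed training loss with Lap(0, sensitivity / epsilon)
    noise before reporting it to the server.

    sensitivity / epsilon values here are illustrative assumptions."""
    rng = rng if rng is not None else np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_loss + noise
```

Smaller epsilon means more noise (stronger privacy) but noisier anomaly signals at the server, so epsilon trades privacy against detection quality.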
In the global aggregation phase, the server applies various algorithms (threshold-based, distance-based, Z-score, and K-means clustering) to identify and eliminate users whose training losses deviate significantly from the norm.
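As a concrete instance of this server-side filtering, here is a minimal sketch of the Z-score variant (the other variants follow the same pattern); the threshold value is an illustrative assumption, not the paper's setting.

```python
import numpy as np

def eliminate_by_zscore(reported_losses, threshold=2.0):
    """Keep only users whose reported loss lies within `threshold`
    standard deviations of the cohort mean; flagged users are
    excluded from aggregation. `threshold` is an illustrative choice."""
    losses = np.asarray(reported_losses, dtype=float)
    z = (losses - losses.mean()) / (losses.std() + 1e-12)
    return np.abs(z) <= threshold  # boolean keep-mask over users

# Nine benign users reporting ~0.45 loss and one outlier at 3.0:
losses = [0.4, 0.45, 0.5, 0.42, 0.48, 0.44, 0.46, 0.43, 0.47, 3.0]
keep = eliminate_by_zscore(losses)  # flags only the last user
```

The K-means variant the authors favor replaces the fixed threshold with a learned cluster boundary, which avoids hand-tuning a cutoff per dataset.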
The authors extensively evaluate the proposed defense, with a focus on the K-means clustering approach. The results show that the defense is able to maintain model performance (accuracy and source class recall) even with up to 40% malicious users, while accurately identifying the majority of attackers. The F1 score for attacker detection remains high, demonstrating the effectiveness of the approach in balancing security and utility.
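The F1 score for attacker detection balances precision (not eliminating benign users) against recall (catching actual attackers). A generic sketch of that metric, not the authors' evaluation code:

```python
def detection_f1(flagged, actual_malicious):
    """F1 of attacker detection, given the set of user ids the server
    flagged and the ground-truth set of malicious user ids."""
    tp = len(flagged & actual_malicious)
    precision = tp / len(flagged) if flagged else 0.0
    recall = tp / len(actual_malicious) if actual_malicious else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

A high F1 here means the defense removes most poisoners without discarding the honest users whose updates the global model needs.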
The authors conclude that their novel user elimination strategy, combined with differential privacy techniques, provides a robust defense against data poisoning attacks in Federated Learning, contributing to the safe adoption of FL in sensitive domains.
Key insights distilled from the paper by Nick Galanis on arxiv.org, 2024-04-22: https://arxiv.org/pdf/2404.12778.pdf