Core Concepts
The authors aim to strengthen federated learning models against adversarial attacks and non-IID data challenges by introducing a novel logits calibration strategy within the federated adversarial training framework. The approach improves model performance by addressing class imbalance and the bias between local and global models.
Abstract
The content discusses the vulnerability of federated learning (FL) to adversarial examples (AEs) and to non-independent and identically distributed (non-IID) data. The proposed method, FedALC, combines adversarial training (AT) with logits calibration to improve model robustness: by adjusting logit weights according to class frequencies, it mitigates the training bias caused by class imbalance. Experimental results on MNIST, Fashion-MNIST, and CIFAR-10 show competitive natural and robust accuracy compared with baselines. The article outlines the standard FL process, introduces federated adversarial training (FAT) as a defense against AEs, and details the calibrated local adversarial training phase. It also covers implementation details, hyperparameters, evaluation metrics, performance comparisons across datasets, communication-efficiency evaluations, and directions for future work.
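The article does not give FedALC's exact calibration formula here, but the idea of adjusting logits by class frequency can be illustrated with a standard additive logit-adjustment sketch: each logit is shifted by the log of its class prior before the cross-entropy loss, so that rare classes incur larger loss and the local model is pushed away from majority-class bias. The function name, the `tau` temperature, and the additive form below are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def calibrated_cross_entropy(logits, labels, class_counts, tau=1.0):
    """Cross-entropy with frequency-based logit calibration (illustrative).

    logits: (N, C) raw scores; labels: (N,) int class ids;
    class_counts: (C,) local per-class sample counts.
    """
    # Per-class prior estimated from the client's local label frequencies.
    priors = class_counts / class_counts.sum()
    # Additive adjustment: rare classes (small prior, very negative log)
    # have their logits lowered, so the loss penalizes them more during
    # training. FedALC's actual weighting scheme may differ.
    adjusted = logits + tau * np.log(priors + 1e-12)
    # Numerically stable log-softmax.
    z = adjusted - adjusted.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Toy example: class 0 is rare (1 sample) vs class 1 (9 samples).
logits = np.array([[2.0, 1.0]])
labels = np.array([0])
counts = np.array([1.0, 9.0])
plain = calibrated_cross_entropy(logits, labels, counts, tau=0.0)
calibrated = calibrated_cross_entropy(logits, labels, counts, tau=1.0)
```

With `tau=0.0` the adjustment vanishes and the function reduces to plain cross-entropy, which makes the effect of the calibration easy to compare on the same batch.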
Stats
"Experimental results show that our proposal achieves significant performance gains in both natural accuracy and robust accuracy."
"FedALC outperforms other baselines in most cases for both natural test accuracy and robust test accuracy."
"Under FGSM attack, our approach still has an advantage over other baselines."
"FedALC exhibits superior communication efficiency compared to other baselines."
"Our proposal significantly surpasses the baselines as the number of iterations increases."