Core Concepts
Existing robust aggregation methods in federated learning are vulnerable to various poisoning attacks, especially in cross-silo settings. FedRISE, a novel robust aggregator, combines variance-reduced sparse gradients with a sign-based gradient valuation function to improve robustness against these attacks.
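The summary above only names the ingredients; the sketch below is a minimal, hypothetical illustration of how such an aggregator could fit together, assuming a momentum-based variance-reduction step, top-k magnitude sparsification controlled by γ, a majority-sign election, and a mean-score inclusion rule. The function name fedrise_style_aggregate, the scoring formula, and the inclusion threshold are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def fedrise_style_aggregate(client_grads, momentum, gamma=0.1, beta_ra=0.9):
    """Illustrative sketch of a FedRISE-style robust aggregation step.

    client_grads : list of 1-D numpy arrays (one flattened gradient per client)
    momentum     : running server momentum buffer (same shape as a gradient)
    gamma        : sparsification ratio -- fraction of coordinates kept per client
    beta_ra      : server momentum coefficient
    """
    d = client_grads[0].size
    k = max(1, int(gamma * d))  # coordinates kept after top-k sparsification

    # 1. Variance reduction: fold each raw gradient into a momentum estimate.
    reduced = [beta_ra * momentum + (1.0 - beta_ra) * g for g in client_grads]

    # 2. Sparsify: keep only the top-k coordinates by magnitude for each client.
    sparse = []
    for g in reduced:
        mask = np.zeros(d, dtype=bool)
        mask[np.argsort(np.abs(g))[-k:]] = True
        sparse.append(np.where(mask, g, 0.0))

    # 3. Sign election: take the majority sign across clients at each coordinate.
    elected_sign = np.sign(np.sum(np.sign(sparse), axis=0))

    # 4. Sign-based valuation: score each client by how well its retained
    #    coordinates agree with the elected signs.
    scores = np.array([
        np.sum(np.sign(g) == elected_sign, where=(g != 0)) / k for g in sparse
    ])

    # 5. Stringent inclusion (assumed rule): keep only clients whose score
    #    clears the mean score across clients.
    keep = scores >= scores.mean()
    aggregate = np.mean([g for g, ok in zip(sparse, keep) if ok], axis=0)

    # 6. Update the server momentum buffer with the aggregated gradient.
    new_momentum = beta_ra * momentum + (1.0 - beta_ra) * aggregate
    return aggregate, new_momentum
```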
Stats
FedRISE uses only two hyperparameters for aggregation: sparsification (γ) and server momentum (β_ra); a usage sketch follows after this list.
The authors experimented with 3 datasets: CIFAR10, FedISIC, and EuroSAT.
The experiments included 6 attack types: ALIE, IPM, Fang, Labelflip, Mimic, and Scale.
The study compared FedRISE with 8 existing robust aggregation methods.
In a cross-silo setting with CIFAR10, ResNet18, 5 clients, and 2 Byzantine clients, FedRISE achieved an F1-score of 0.72 against ALIE, 0.82 against IPM, and 0.78 against Fang.
FedRISE remained effective even with a high proportion of Byzantine clients (up to 48%) in the CIFAR10 IID-split experiment.
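As a usage sketch for the two aggregation hyperparameters noted above, the snippet below plugs placeholder values into the fedrise_style_aggregate sketch from the Core Concepts section; the values 0.1 and 0.9, the dimensions, and the crude Byzantine client are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Hypothetical usage; requires the fedrise_style_aggregate sketch defined above.
rng = np.random.default_rng(0)
dim, n_clients = 1000, 5
momentum = np.zeros(dim)
client_grads = [rng.normal(size=dim) for _ in range(n_clients)]
client_grads[0] *= -10.0  # crude stand-in for a Byzantine update

update, momentum = fedrise_style_aggregate(
    client_grads, momentum, gamma=0.1, beta_ra=0.9
)
print(update.shape)  # (1000,)
```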
Quotes
"Existing robust aggregators collapse for at least some attacks under severe settings, while FedRISE demonstrates better robustness because of a stringent gradient inclusion formulation."
"Our experiments show that FedRISE is more resilient in handling attacks with varying objectives."
"FedRISE uses only two hyperparameters for aggregation (sparsification γ and server momentum βra) that are minimally dependent on client counts, training settings, and data distribution."