Core Concepts
The authors propose a novel hybrid Byzantine attack that combines a sparse yet aggressive attack on sensitive neural network weights with a stealthy but accumulating attack; together, the two components form an attack that is both strong and imperceptible to a variety of defense mechanisms in federated learning.
Abstract
The authors argue that existing Byzantine attacks often focus on being either aggressive or imperceptible, but not both. To address this, they propose a novel hybrid Byzantine attack that combines two components:
A sparse yet aggressive attack: This component targets only a small set of sensitive weights in the neural network with large perturbations, aiming to bypass defenses that rely on index-wise outlier detection.
A stealthy but accumulating attack: This component applies small perturbations across many weights that accumulate over rounds to degrade the model's performance, while remaining imperceptible to defenses that rely on geometric distance-based outlier detection.
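The two components above can be sketched as a single malicious update. This is a minimal illustration, not the authors' implementation; the function name, scaling parameters, and sign-flip heuristic are all assumptions for the sketch.

```python
import numpy as np

def hybrid_byzantine_update(honest_update, sensitive_idx, round_num,
                            aggressive_scale=5.0, stealth_scale=0.01):
    """Hypothetical sketch of the two-part attack.

    - Sparse/aggressive: a large perturbation on a few sensitive
      coordinates only.
    - Stealthy/accumulating: a tiny perturbation on all remaining
      coordinates that grows with the round index, so each individual
      update stays close to the honest one.
    All parameter values are illustrative, not from the paper.
    """
    malicious = honest_update.copy()
    # Aggressive part: push the sensitive weights hard against their
    # honest direction (one plausible choice of perturbation).
    malicious[sensitive_idx] -= aggressive_scale * np.abs(honest_update[sensitive_idx])
    # Stealthy part: a small drift on the rest that accumulates over rounds.
    mask = np.ones(malicious.shape, dtype=bool)
    mask[sensitive_idx] = False
    malicious[mask] -= stealth_scale * round_num * np.sign(honest_update[mask])
    return malicious
```

Note how the stealthy term scales with `round_num`: any single round's deviation is small, but the drift compounds across training rounds.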
The authors leverage insights from neural network pruning to identify the sensitive weights targeted by the aggressive component. Through extensive simulations, they show that this hybrid approach is effective against a wide range of defense mechanisms, reducing test accuracy by up to 60% in IID settings and causing the model to diverge entirely in non-IID settings.
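One way to read the pruning connection: magnitude-based pruning removes small weights because the large-magnitude ones dominate model behavior, so those same weights are natural targets for the aggressive component. The sketch below uses that magnitude criterion; the function name and the `frac` parameter are assumptions, and the paper's actual sensitivity criterion may differ.

```python
import numpy as np

def sensitive_indices(weights, frac=0.01):
    """Select the top `frac` fraction of weights by absolute magnitude,
    mirroring the criterion of magnitude-based pruning (hypothetical
    stand-in for the paper's sensitivity measure)."""
    k = max(1, int(frac * weights.size))
    flat = np.abs(weights).ravel()
    # argpartition finds the k largest-magnitude entries without a full sort.
    return np.argpartition(flat, -k)[-k:]
```

Because only this small index set receives the large perturbation, the attack stays sparse while still hitting the weights the model depends on most.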
Stats
Aside from the reported test-accuracy reduction of up to 60% in IID settings, the authors do not provide further specific numerical data or metrics in the content.
Quotes
The authors do not provide any direct quotes in the content.