Core Concepts
A novel gradient inversion attack based on a style migration network (GI-SMN) that reconstructs original user data from shared gradient information without requiring a powerful attacker or idealized prior knowledge.
Abstract
The paper proposes a novel gradient inversion attack called GI-SMN that can reconstruct original user data from gradient information in federated learning without requiring powerful attackers or idealized prior knowledge.
Key highlights:
GI-SMN overcomes the dependence on powerful attackers and idealized prior knowledge, making the attack more threatening.
GI-SMN can recreate original data with high similarity in batches using a style migration network and a series of regularization terms.
GI-SMN outperforms state-of-the-art gradient inversion attacks in visual effect and similarity metrics.
The paper demonstrates that gradient pruning and differential privacy are not effective defenses against privacy breaches in federated learning.
The authors first formulate the gradient inversion problem and provide an overview of GI-SMN's workflow. They then discuss the attacker's capabilities, the use of a pre-trained generative model (StyleGAN-XL), and the auxiliary regularization terms employed to enhance gradient matching.
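The paper itself does not include code, but the optimization loop described above can be pictured roughly as follows. This is a minimal PyTorch-style sketch, assuming the attacker holds the victim's shared gradients, a copy of the global model, and a pretrained generator such as StyleGAN-XL; the `generator(w)` interface, the `latent_dim` attribute, the cosine-similarity matching loss, and the total-variation regularizer are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def gradient_inversion(model, target_grads, generator, labels,
                       steps=2000, lr=0.1, tv_weight=1e-4):
    """Sketch of a gradient-matching attack driven by a pretrained generator.

    `generator(w)` is assumed to map latent codes to an image batch (e.g. a
    StyleGAN-XL synthesis network); `target_grads` is the list of gradients
    the victim shared for one training step.
    """
    # Optimize the generator's latent codes instead of raw pixels.
    w = torch.randn(len(labels), generator.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        x = generator(w)                          # candidate reconstruction
        loss = F.cross_entropy(model(x), labels)  # same task loss as the victim
        dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)

        # Gradient-matching term: maximize cosine similarity with the shared gradients.
        match = sum(1 - F.cosine_similarity(dg.flatten(), tg.flatten(), dim=0)
                    for dg, tg in zip(dummy_grads, target_grads))

        # Auxiliary regularization, e.g. total variation to favour natural images.
        tv = (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
             (x[..., :, 1:] - x[..., :, :-1]).abs().mean()

        (match + tv_weight * tv).backward()
        opt.step()

    return generator(w).detach()
```

Searching in the generator's latent space rather than pixel space keeps candidate reconstructions on the natural-image manifold, which is how the generative prior stands in for the idealized prior knowledge earlier attacks required.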
Extensive experiments are conducted on the CIFAR10, ImageNet, and FFHQ datasets. GI-SMN is compared against state-of-the-art gradient inversion attacks and shows superior performance on the PSNR, SSIM, and LPIPS metrics. The impact of image size, batch size, loss functions, and different initialization methods on reconstruction quality is also analyzed.
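For reference, the similarity metrics reported in the experiments can be computed along the following lines. This is a small sketch using the standard PSNR formula and scikit-image's SSIM implementation; LPIPS additionally requires a pretrained perceptual network (for example the `lpips` package), so it is only noted in a comment. Image shapes and the [0, 1] value range are assumptions.

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(original, reconstruction, data_range=1.0):
    """Peak signal-to-noise ratio in dB; images assumed to be float arrays in [0, 1]."""
    mse = np.mean((original - reconstruction) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim(original, reconstruction, data_range=1.0):
    """Structural similarity; assumes HxWxC images with the channel axis last."""
    return structural_similarity(original, reconstruction,
                                 data_range=data_range, channel_axis=-1)

# LPIPS compares deep features extracted by a pretrained network
# (e.g. the `lpips` package) and is therefore not reproduced here.
```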
Furthermore, the paper evaluates the effectiveness of GI-SMN against gradient pruning and differential privacy defenses, showing that these techniques are not sufficient to prevent privacy breaches in federated learning.
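For context, both defenses can be viewed as simple transformations applied to the gradients before they are shared. The sketch below shows magnitude-based gradient pruning and Gaussian-noise perturbation in the spirit of differential privacy; the keep ratio and noise variance mirror the knobs varied in the paper's experiments, but the exact pruning and noising procedures here are assumptions, not the defenses' reference implementations.

```python
import torch

def prune_gradients(grads, keep_ratio=0.01):
    """Keep only the largest-magnitude fraction of each gradient tensor, zeroing the rest."""
    pruned = []
    for g in grads:
        k = max(1, int(keep_ratio * g.numel()))
        threshold = g.abs().flatten().topk(k).values[-1]
        pruned.append(torch.where(g.abs() >= threshold, g, torch.zeros_like(g)))
    return pruned

def add_dp_noise(grads, noise_variance=1e-4):
    """Perturb each gradient with zero-mean Gaussian noise (a simplified DP-style defense)."""
    std = noise_variance ** 0.5
    return [g + std * torch.randn_like(g) for g in grads]
```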
Stats
The PSNR of the reconstructed images reaches up to 36.98 dB on CIFAR10, 34.30 dB on ImageNet, and 31.31 dB on FFHQ.
GI-SMN outperforms state-of-the-art gradient inversion attacks, improving PSNR by averages of 125% and 61%.
Even when gradient pruning retains only 1% of the gradient information, GI-SMN still achieves a PSNR of 20.05 dB.
When the noise variance of the differential privacy defense is below 1e-4, its effect on gradient reconstruction is minimal, with a PSNR of 21.88 dB.
Quotes
"GI-SMN overcomes the dependence on powerful attackers and idealized prior knowledge, making the attack more threatening."
"GI-SMN can recreate original data with high similarity in batches using a style migration network and a series of regularization terms."
"GI-SMN outperforms state-of-the-art gradient inversion attacks in visual effect and similarity metrics."
"The paper demonstrates that gradient pruning and differential privacy are not effective defenses against privacy breaches in federated learning."