Exact batch reconstruction is possible in the honest-but-curious setting for federated learning.
Exact batch reconstruction is possible in federated learning, challenging prior assumptions and highlighting privacy risks.
Gradient inversion attacks can accurately recover private training data from shared gradients in Federated Learning, but existing methods rely on the impractical assumption of access to large amounts of auxiliary data. This study proposes Gradient Inversion using Practical Image Prior (GI-PIP), a novel method that significantly relaxes the auxiliary-data requirement in both amount and distribution, posing a greater threat to real-world Federated Learning.
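As a hedged toy illustration of the optimization-based inversion family that GI-PIP belongs to: the sketch below follows the classic gradient-matching idea of Deep Leakage from Gradients, not GI-PIP itself, and uses a one-neuron linear model with a known label instead of images or an image prior. All model sizes, learning rates, and variable names are illustrative assumptions. The attacker optimizes a dummy input until its gradient reproduces the gradient the victim shared.

```python
import random

random.seed(1)
dim = 4
w = [random.uniform(-1, 1) for _ in range(dim)]       # shared model weights
x_true = [random.uniform(-1, 1) for _ in range(dim)]  # private training input
t = 0.7                                               # label, assumed known in this toy

def loss_grad(x):
    """dL/dw for the toy model y = w.x with squared loss L = (y - t)^2."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    return [2.0 * (y - t) * xi for xi in x]

g_obs = loss_grad(x_true)  # what the server / eavesdropper observes

def match(x):
    """Gradient-matching objective the attacker minimizes."""
    s = 0.0
    for a, b in zip(loss_grad(x), g_obs):
        diff = a - b
        s += diff * diff
    return s

# Plain gradient descent on the dummy input via finite differences,
# with random restarts since the matching objective is non-convex.
best_x, best_d = None, float("inf")
eps, lr = 1e-6, 0.01
for _ in range(10):
    x_hat = [random.uniform(-1, 1) for _ in range(dim)]
    for _ in range(6000):
        base = match(x_hat)
        fd = []
        for i in range(dim):
            x_hat[i] += eps
            fd.append((match(x_hat) - base) / eps)
            x_hat[i] -= eps
        x_hat = [xi - lr * gi for xi, gi in zip(x_hat, fd)]
    if match(x_hat) < best_d:
        best_x, best_d = x_hat[:], match(x_hat)

print("gradient-match residual:", best_d)
```

Even this tiny example exhibits the non-convexity that motivates priors in real attacks: the matching objective admits more than one exact minimizer, which is why methods like GI-PIP add an image prior to steer the optimization toward natural data.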
This work proposes the first analytical algorithm that accurately recovers augmented labels, such as those produced by label smoothing and mixup, together with the last-layer input features, from gradients in gradient inversion attacks, without depending on the presence of bias terms in the network.
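The classic last-layer feature recovery that such analytical attacks build on can be sketched in toy form; note this baseline does require a bias term, which is exactly the limitation the work above removes. For a fully connected layer y = W x + b, dL/dW = (dL/dy) x^T and dL/db = dL/dy, so the private input feature x falls out as an elementwise ratio of observed gradients. All values below are made up for illustration.

```python
# Private last-layer input feature and an arbitrary upstream gradient dL/dy.
x = [0.5, -1.25, 2.0, 0.75]
dL_dy = [1.0, -2.0, 0.5]

# What the server observes: dL/dW has rows (dL/dy)_j * x, and dL/db = dL/dy.
grad_W = [[g * xi for xi in x] for g in dL_dy]
grad_b = dL_dy[:]

# Pick any row j with a nonzero bias gradient and divide elementwise.
j = 1
x_rec = [wji / grad_b[j] for wji in grad_W[j]]
print(x_rec)  # recovers x exactly
```

The division cancels the unknown upstream gradient entirely, which is why this recovery is exact rather than approximate; without b, the per-row scale is unknown, and removing that dependence is the harder analytical problem.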
Access to gradients from even a small fraction of a Transformer model's parameters, such as a single layer or even a single linear component, can enable reconstruction of private training data, making distributed learning systems more vulnerable than previously thought.
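A minimal sketch of why even one bias-free linear component leaks its input (a toy scalar model, not the paper's Transformer setting; all names and values are illustrative assumptions): for y = w.x with loss L = (y - t)^2, the observed gradient g = 2(y - t) x is a scalar multiple c = 2(y - t) of the input, and c satisfies the quadratic c^2 + 2 t c - 2 (w.g) = 0, which the attacker can solve in closed form, leaving at most two candidate inputs.

```python
import math

w = [0.4, -0.9, 0.3, 0.7]      # weights of a single linear component
x_true = [1.2, -0.5, 0.8, -1.0]  # private input
t = 0.7                          # label, assumed known in this toy

# What the attacker observes: g = c * x with c = 2 * (w.x - t).
y = sum(wi * xi for wi, xi in zip(w, x_true))
c_true = 2.0 * (y - t)
g = [c_true * xi for xi in x_true]

# Attack: w.g = c*y and y = t + c/2 imply c^2 + 2*t*c - 2*(w.g) = 0.
wg = sum(wi * gi for wi, gi in zip(w, g))
disc = t * t + 2.0 * wg          # equals (t + c)^2, so never negative here
cands = []
for c in (-t + math.sqrt(disc), -t - math.sqrt(disc)):
    if abs(c) > 1e-12:
        cands.append([gi / c for gi in g])

# x_true is among at most two candidates -- the gradient alone pins the
# input down up to this small ambiguity, with no bias term needed.
print(len(cands), "candidate inputs recovered")
```

Real attacks on Transformer components face higher-dimensional versions of the same structure, which is why partial gradient access is already dangerous.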
This paper proposes ST-GIA, an attack that reconstructs user information from gradients in federated learning over spatiotemporal data, together with an improved variant, ST-GIA+, and presents an adaptive defense strategy to protect user privacy against these attacks.
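The adaptive defense itself is not reproduced here; as a hedged baseline sketch only, a generic gradient-perturbation defense (norm clipping plus Gaussian noise, in the style of DP-SGD) looks as follows. The function name and constants are illustrative, not from the paper; an adaptive strategy would presumably tune the clipping bound and noise scale to the detected attack rather than fix them.

```python
import math
import random

random.seed(0)

def perturb(grad, clip=1.0, sigma=0.5):
    """Clip a client gradient to L2 norm `clip`, then add Gaussian noise."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]
    return [g + random.gauss(0.0, sigma * clip) for g in clipped]

g = [3.0, -4.0]          # raw client gradient, L2 norm 5
g_priv = perturb(g)      # what is actually shared with the server
print(g_priv)
```

Clipping bounds any single client's influence and the noise masks the exact gradient direction, which is what degrades reconstruction attacks like ST-GIA; the cost is slower or noisier convergence, hence the appeal of adapting the perturbation to the threat.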