Key Concepts
The authors aim to provide strong privacy guarantees for Federated Learning (FL) under Data Reconstruction Attacks (DRA) by constraining the information transmitted in each round, using a channel model of parameter exchange together with protective operations carried out in the data space.
Abstract
The paper defends against Data Reconstruction Attacks (DRA) in Federated Learning (FL) from an information-theoretic perspective. It introduces a channel model to quantify information leakage, proposes methods to constrain the information transmitted per round, and validates these techniques through experiments on real-world datasets.
Focusing on privacy protection in FL against DRAs, the paper establishes a theoretical framework based on mutual information to evaluate privacy leakage and designs methods that limit this leakage effectively. By moving the protective operations from the parameter space to the data space, the approach significantly improves training efficiency and model accuracy under a fixed information-leakage constraint.
Key contributions include an analysis of how mutual information accumulates across training rounds and the design of controlled parameter channels that keep the transmitted information below a specified threshold. The proposed implementations aim to balance utility and privacy while improving the safety, efficiency, and flexibility of FL.
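The general idea of a controlled parameter channel can be sketched as follows. This is an illustrative sketch, not the authors' exact mechanism: all function and variable names are assumptions. Given the per-round capacity bound C = (d/2)·ln((λ + σ)/σ) from the Statistics section below, we solve for the smallest noise variance σ that keeps C at or below a chosen threshold T, then add Gaussian noise of that variance to the model update before transmission:

```python
import math
import random

def noise_variance_for_threshold(d: int, lam: float, threshold: float) -> float:
    """Smallest sigma such that (d/2) * ln((lam + sigma)/sigma) <= threshold.

    Solving the bound for sigma gives sigma >= lam / (exp(2*threshold/d) - 1).
    d         -- parameter dimension
    lam       -- signal variance term (lambda in the capacity formula)
    threshold -- cap on per-round transmitted information (nats)
    """
    return lam / math.expm1(2.0 * threshold / d)

def noisy_update(update: list[float], sigma: float) -> list[float]:
    """Add i.i.d. Gaussian noise of variance sigma to each coordinate."""
    std = math.sqrt(sigma)
    return [u + random.gauss(0.0, std) for u in update]

# Calibrate noise so the channel carries at most T nats this round.
d, lam, T = 4, 1.0, 2.0
sigma = noise_variance_for_threshold(d, lam, T)
capacity = (d / 2.0) * math.log((lam + sigma) / sigma)
assert capacity <= T + 1e-9  # the bound is met (with equality, analytically)

protected = noisy_update([0.5, -1.2, 0.3, 0.0], sigma)
```

Raising the threshold T permits a smaller σ (less noise, better utility); shrinking T forces more noise, trading accuracy for privacy.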
Statistics
I(D; W̃_o^(t) | W_i^(t)) ≤ f^(t)(σ)
C^(t) = (d/2) · ln((λ^(t) + σ) / σ)
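The capacity formula above can be evaluated numerically to see how the noise variance controls leakage (a hedged sketch; the function name and parameter values are illustrative, not from the paper):

```python
import math

def channel_capacity(d: int, lam: float, sigma: float) -> float:
    """Per-round channel capacity C = (d/2) * ln((lam + sigma) / sigma).

    d     -- parameter dimension
    lam   -- signal variance term (lambda^(t) in the formula)
    sigma -- variance of the added noise
    """
    return (d / 2.0) * math.log((lam + sigma) / sigma)

# More noise => lower capacity => less information transmitted per round.
low_noise = channel_capacity(d=1000, lam=1.0, sigma=0.1)
high_noise = channel_capacity(d=1000, lam=1.0, sigma=10.0)
assert high_noise < low_noise
```

Note that capacity grows linearly in the dimension d, which is why the accumulated mutual information over many rounds must be tracked explicitly.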
Quotes
"We demonstrate that the amount of transmitted information decides the lower bound of the reconstruction error for DRA attacks."
"Our protecting goal is to decide the covariance matrix for added noise according to a given data distribution."