Core Concept
The CorBin-FL and AugCorBin-FL mechanisms achieve differential privacy guarantees in federated learning by using correlated binary stochastic quantization of local model updates.
Summary
The content introduces two novel privacy mechanisms for federated learning:
- CorBin-FL:
  - Uses correlated binary stochastic quantization to achieve parameter-level local differential privacy (PLDP).
  - Clients share a limited amount of common randomness to perform the correlated quantization without compromising individual privacy.
  - Theoretical analysis shows that CorBin-FL asymptotically optimizes the privacy-utility tradeoff between mean squared error and PLDP.
- AugCorBin-FL:
  - An extension of CorBin-FL that, in addition to PLDP, also achieves user-level and sample-level central differential privacy.
  - A hybrid mechanism in which a fraction of clients use CorBin-FL and the rest use the LDP-FL mechanism.
  - Comes with bounds on the privacy parameters and the mean squared error performance.
The proposed mechanisms are shown to outperform existing differentially private federated learning approaches, including the Gaussian, Laplacian, and LDP-FL mechanisms, in terms of model accuracy under equal PLDP privacy budgets. The mechanisms are also robust to client dropouts and scale well with the number of clients.
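The core building block above, correlated binary stochastic quantization, can be illustrated with a minimal sketch. The code below shows only the unbiased correlated quantizer for a pair of clients sharing one common uniform draw used antithetically so their quantization errors tend to cancel in the average; the paper's full mechanism additionally applies privacy-preserving randomization to reach ϵ_p-PLDP, which is omitted here. Function names and the pairing scheme are illustrative assumptions, not the paper's exact construction.

```python
import random

def binary_quantize_pair(w1, w2, r, u=None):
    """Unbiased binary quantization of two scalars w1, w2 in [-r, r].

    A shared uniform draw u (common randomness) is used antithetically:
    client 1 thresholds u, client 2 thresholds 1 - u, so their
    quantization errors are negatively correlated and partially cancel
    when the server averages the outputs. (Illustrative sketch only;
    the DP randomization step of CorBin-FL is not included.)
    """
    if u is None:
        u = random.random()       # common randomness shared by the pair
    p1 = (w1 + r) / (2 * r)       # P(client 1 outputs +r); gives E[q1] = w1
    p2 = (w2 + r) / (2 * r)       # P(client 2 outputs +r); gives E[q2] = w2
    q1 = r if u < p1 else -r           # client 1 uses u directly
    q2 = r if (1 - u) < p2 else -r     # client 2 uses the antithetic draw
    return q1, q2
```

Each output is individually unbiased (E[q_i] = w_i), matching the unbiasedness property quoted below for the aggregated update, while the shared randomness reduces the variance of the pair's average relative to independent quantization.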
Statistics
The mean squared error of the CorBin-FL mechanism is bounded by:
(r²/(2n)) · ((√2 − 1)·α(ϵ_p) + 1) / ((√2 + 1)·α(ϵ_p) − 1)
The mean squared error of the AugCorBin-FL mechanism is bounded by:
γ·r²·α²(ϵ_p)/n + (1 − γ)·(2r²/(nθ)) · ((√2 − 1)·α(ϵ_p) + 1) / ((√2 + 1)·α(ϵ_p) − 1)
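The two bounds above can be transcribed directly into code for numerical comparison. The sketch below takes α(ϵ_p) as an opaque input, since its closed form is not reproduced in this summary; r, n, γ, and θ are the radius, client count, mixing fraction, and θ parameter as they appear in the formulas.

```python
import math

SQRT2 = math.sqrt(2)

def corbin_mse_bound(r, n, alpha):
    """CorBin-FL MSE bound: (r^2/2n) * ((sqrt2-1)a+1)/((sqrt2+1)a-1).

    alpha stands for the privacy-dependent factor α(ϵ_p) from the paper.
    """
    return (r**2 / (2 * n)) * ((SQRT2 - 1) * alpha + 1) / ((SQRT2 + 1) * alpha - 1)

def augcorbin_mse_bound(r, n, alpha, gamma, theta):
    """AugCorBin-FL MSE bound, mixing the two mechanisms with fraction gamma."""
    ldp_term = gamma * r**2 * alpha**2 / n
    corbin_term = ((1 - gamma) * (2 * r**2 / (n * theta))
                   * ((SQRT2 - 1) * alpha + 1) / ((SQRT2 + 1) * alpha - 1))
    return ldp_term + corbin_term
```

As expected from the 1/n factors, both bounds decay linearly in the number of clients, and with γ = 0 and θ = 1 the AugCorBin-FL bound reduces to a constant multiple of the CorBin-FL bound.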
Quotes
"The CorBin-FL mechanism is unbiased, i.e., E(W_g) = 1/n Σ_i∈[n] w_i, where w_i, i ∈[n] are the local client updates, and W_g is the average of the obfuscated updates at the server."
"The AugCorBin-FL mechanism achieves (ϵ_u, δ)-UCDP and ϵ_p-PLDP."