This work proposes a novel quality-aware, incentive-boosted federated learning framework based on ρ-zero-concentrated differential privacy (ρ-zCDP) that incentivizes the participation of mobile devices holding high-quality data and eliminates the privacy threats associated with gradient disclosure.
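As background for the ρ-zCDP guarantee mentioned above, here is a minimal sketch of the standard Gaussian mechanism: clipping a gradient to L2 norm `clip_norm` bounds its sensitivity, and adding Gaussian noise with `sigma = clip_norm / sqrt(2 * rho)` satisfies ρ-zCDP. This is the generic mechanism, not the paper's specific framework; the function name and parameters are illustrative.

```python
import numpy as np

def zcdp_gaussian_release(grad, clip_norm, rho, rng):
    """Release a gradient under rho-zCDP via L2 clipping + Gaussian noise.

    With L2 sensitivity `clip_norm`, Gaussian noise of scale
    sigma = clip_norm / sqrt(2 * rho) satisfies rho-zCDP.
    (Generic mechanism sketch; hyperparameter names are illustrative.)
    """
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))  # bound sensitivity
    sigma = clip_norm / np.sqrt(2.0 * rho)
    return clipped + rng.normal(0.0, sigma, size=grad.shape)

rng = np.random.default_rng(0)
g = np.array([3.0, 4.0])  # L2 norm 5, so it gets scaled down to norm 1
noisy = zcdp_gaussian_release(g, clip_norm=1.0, rho=0.5, rng=rng)
```

Smaller ρ means a stronger privacy guarantee and therefore larger noise scale, which is the quality/privacy trade-off such frameworks must balance against participation incentives.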
In high-dimensional settings, accurate estimation is not feasible under the untrusted central server constraint in federated learning, even for simple sparse mean estimation problems. However, in the trusted central server setting, novel algorithms can achieve near-optimal estimation and inference results.
The CorBin-FL and AugCorBin-FL mechanisms achieve differential privacy guarantees in federated learning by using correlated binary stochastic quantization of local model updates.
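To give a feel for correlated binary quantization (as an illustrative sketch only, not CorBin-FL's actual mechanism or its privacy analysis), the toy function below quantizes two clients' scalars in [-1, 1] to {-1, +1} using a shared random draw, so each output is an unbiased one-bit estimate while the quantization noise is correlated across clients.

```python
import numpy as np

def correlated_binary_quantize(x1, x2, rng):
    """One-bit stochastic quantization of two scalars in [-1, 1] using
    shared randomness, making the quantization noise correlated across
    the two clients. Each output is unbiased: E[q_i] = x_i.

    (Toy sketch; CorBin-FL's real mechanism and DP guarantees differ.)
    """
    u = rng.uniform()  # common random draw shared by both clients
    q1 = 1.0 if u < (1.0 + x1) / 2.0 else -1.0
    q2 = 1.0 if u < (1.0 + x2) / 2.0 else -1.0
    return q1, q2

rng = np.random.default_rng(1)
q1, q2 = correlated_binary_quantize(0.4, -0.2, rng)
```

Because both clients threshold the same uniform draw, their quantization errors move together, which is the kind of correlation such schemes exploit when the server aggregates the one-bit updates.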
This paper introduces secure stateful aggregation, a novel protocol enabling efficient and private federated learning with correlated noise, addressing the limitations of traditional secure aggregation methods in DP-FTRL algorithms.
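A toy numerical sketch of why correlated noise matters in DP-FTRL-style training (this is a standard illustration, not the paper's secure stateful aggregation protocol): with i.i.d. per-round noise, the noise in the running prefix sum of updates grows like sqrt(T), whereas anticorrelated per-round noise `c_t = z_t - z_{t-1}` telescopes, leaving only O(sigma) noise in every prefix sum.

```python
import numpy as np

def prefix_noise_independent(T, sigma, rng):
    """I.i.d. per-round noise: prefix-sum noise std grows like sqrt(T)."""
    z = rng.normal(0.0, sigma, size=T)
    return np.cumsum(z)

def prefix_noise_correlated(T, sigma, rng):
    """Anticorrelated per-round noise c_t = z_t - z_{t-1}: the prefix sum
    telescopes to z_t, so its magnitude stays O(sigma) at every round.

    (Toy illustration of correlated noise in DP-FTRL; the paper's
    contribution is maintaining such state under secure aggregation.)
    """
    z = rng.normal(0.0, sigma, size=T)
    c = np.diff(np.concatenate([[0.0], z]))  # c_t = z_t - z_{t-1}
    return np.cumsum(c)  # telescopes back to z, elementwise

corr = prefix_noise_correlated(100, 1.0, np.random.default_rng(7))
```

Traditional secure aggregation sums independent client contributions per round and cannot maintain the cross-round state `z_{t-1}` privately, which is the gap the secure stateful aggregation protocol targets.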