The paper considers the problem of differentially private stochastic convex optimization (DP-SCO) for heavy-tailed data. Most prior work on DP-SCO with heavy-tailed data either relies on full gradient descent (GD) or clips the stochastic gradient at every iteration of SGD (multi-time clipping); both approaches are inefficient for large-scale problems.
The authors propose a new algorithm called AClipped-dpSGD that uses a one-time clipping strategy on the averaged gradients. They provide a novel analysis to bound the bias and private mean estimation error of this clipping strategy.
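To make the idea concrete, here is a minimal Python sketch of one update that clips the averaged mini-batch gradient once and then adds Gaussian noise, in contrast to per-sample clipping. The function name, the toy least-squares objective, and all parameter values (clip_threshold, noise_std, lr) are illustrative assumptions, not the authors' implementation or their calibrated privacy noise.

```python
import numpy as np

def aclipped_dp_sgd_step(x, grad_samples, clip_threshold, noise_std, lr, rng):
    """One illustrative clipped-and-noised SGD step (sketch, not the paper's code).

    grad_samples: per-sample gradients at the current iterate x, shape (batch, dim).
    The samples are averaged first; the *average* is clipped once and perturbed
    with Gaussian noise before the descent update.
    """
    avg_grad = grad_samples.mean(axis=0)                      # average over the mini-batch
    norm = np.linalg.norm(avg_grad)
    clipped = avg_grad * min(1.0, clip_threshold / max(norm, 1e-12))  # one-time clipping
    noisy = clipped + rng.normal(0.0, noise_std, size=avg_grad.shape)  # Gaussian perturbation
    return x - lr * noisy                                     # gradient step

# Toy usage on f(x) = mean_i 0.5 * (a_i^T x - b_i)^2 with synthetic data
rng = np.random.default_rng(0)
A, b = rng.normal(size=(256, 10)), rng.normal(size=256)
x = np.zeros(10)
for _ in range(200):
    per_sample_grads = (A @ x - b)[:, None] * A               # per-sample gradients, shape (256, 10)
    x = aclipped_dp_sgd_step(x, per_sample_grads, clip_threshold=5.0,
                             noise_std=0.1, lr=0.05, rng=rng)
```

Because only the averaged gradient is clipped, the clipping and noise addition happen once per mini-batch rather than once per sample, which is the source of the efficiency gain the summary refers to.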
For constrained and unconstrained convex problems, the authors establish new convergence results and improved complexity bounds for AClipped-dpSGD compared to prior work. They also extend the analysis to the strongly convex case and the non-smooth case (objectives with Hölder-continuous gradients). All results are shown to hold with high probability under heavy-tailed data.
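For context, Hölder continuity of the gradient is the standard relaxation of Lipschitz smoothness; stated with generic constants (not necessarily the paper's notation):

\[
\|\nabla f(x) - \nabla f(y)\| \le L_{\nu}\, \|x - y\|^{\nu}, \qquad \nu \in (0, 1],
\]

where \(\nu = 1\) recovers the usual \(L\)-smooth case and \(\nu < 1\) covers non-smooth objectives.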
Numerical experiments are conducted to support the theoretical improvements of the proposed algorithm over prior methods.
Source: Chenhan Jin, ... at arxiv.org, 09-11-2024. https://arxiv.org/pdf/2206.13011.pdf