The paper introduces differentially private (DP) algorithms for solving bilevel optimization problems, where the upper-level objective is smooth but not necessarily convex, and the lower-level problem is smooth and strongly convex.
Key highlights:
Bilevel ERM Algorithm (Theorem 3.1): The authors present an (ε, δ)-DP first-order algorithm that outputs a point with hypergradient norm bounded by Õ(K1·(√dx/(εn))^(1/2) + K2·(√dy/(εn))^(1/3)), where K1 and K2 are problem-dependent constants. The algorithm also handles the case of a non-trivial constraint set X.
Mini-batch Bilevel ERM Algorithm (Theorem 4.1): The authors design a mini-batch variant of the previous algorithm, which replaces full gradients with mini-batch gradients. It ensures (ε, δ)-DP and outputs a point with hypergradient norm bounded by Õ(K1·(√dx/(εn))^(1/2) + K2·(√dy/(εn))^(1/3) + K3/bout), where K3 is another problem-dependent constant and bout is the outer batch size.
Population loss guarantees (Theorem 5.1): The authors provide guarantees for stochastic (population) objectives, showing that their (ε, δ)-DP algorithm outputs a point with hypergradient norm bounded by Õ(K1·(√dx/(εn))^(1/2) + K1·(dx/n)^(1/2) + K2·(√dy/(εn))^(1/3)), with an additional 1/bout term in the mini-batch setting.
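The hypergradient whose norm these theorems bound is the gradient of the hyperobjective F(x) = f(x, y*(x)), obtained via the implicit function theorem. The following minimal sketch illustrates the computation on a hypothetical quadratic toy instance (the functions f, g, and all constants below are illustrative assumptions, not the paper's setup):

```python
import numpy as np

# Toy bilevel instance (hypothetical, for illustration only):
#   inner:  g(x, y) = 0.5 * ||y - A x||^2        (1-strongly convex in y)
#   outer:  f(x, y) = 0.5 * ||y - b||^2 + 0.5 * ||x||^2
# Then y*(x) = A x, and the hyperobjective is F(x) = f(x, y*(x)).
rng = np.random.default_rng(0)
dx, dy = 3, 4
A = rng.standard_normal((dy, dx))
b = rng.standard_normal(dy)

def inner_solve(x, steps=200, lr=0.5):
    """Approximate y*(x) by gradient descent on the strongly convex inner problem."""
    y = np.zeros(dy)
    for _ in range(steps):
        y -= lr * (y - A @ x)                    # gradient of g in y
    return y

def hypergradient(x):
    """Implicit-function-theorem hypergradient:
       grad F(x) = grad_x f + (dy*/dx)^T grad_y f,
       with dy*/dx = -(hess_yy g)^{-1} (d/dx of grad_y g)."""
    y = inner_solve(x)
    grad_x_f = x                                 # grad_x f(x, y)
    grad_y_f = y - b                             # grad_y f(x, y)
    hess_yy = np.eye(dy)                         # hess_yy g = I here
    cross = -A                                   # d/dx of grad_y g
    J = -np.linalg.solve(hess_yy, cross)         # dy*/dx = A for this instance
    return grad_x_f + J.T @ grad_y_f

x0 = rng.standard_normal(dx)
# Closed form for this toy instance: grad F(x) = x + A^T (A x - b)
print(np.allclose(hypergradient(x0), x0 + A.T @ (A @ x0 - b), atol=1e-6))
```

Strong convexity of the inner problem is what makes y*(x) well-defined and the Hessian solve above invertible, which is why the paper assumes it.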
The key technical challenges addressed include: (1) privatizing the inner and outer optimization problems to avoid privacy leakage, (2) obtaining tight Lipschitz bounds on the approximate hyperobjective, and (3) analyzing the robustness of the outer loop to inexact and noisy gradients.
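The first of these challenges, privatizing gradient queries, is commonly handled with Gaussian noise calibrated to the gradient's sensitivity. Below is a DP-SGD-style sketch using the standard Gaussian-mechanism calibration; the clipping threshold and replace-one sensitivity accounting are generic assumptions, not the paper's exact mechanism or constants:

```python
import numpy as np

def private_avg_gradient(per_sample_grads, eps, delta, clip=1.0, rng=None):
    """Privatize an averaged gradient with the Gaussian mechanism.
    A generic DP-SGD-style sketch -- not the paper's exact algorithm."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(per_sample_grads)
    # Clip each per-sample gradient so one sample's influence is bounded by `clip`.
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_sample_grads]
    avg = np.mean(clipped, axis=0)
    # Replace-one sensitivity of the clipped average is 2 * clip / n;
    # standard Gaussian-mechanism noise scale for (eps, delta)-DP.
    sigma = (2 * clip / n) * np.sqrt(2 * np.log(1.25 / delta)) / eps
    return avg + rng.normal(0.0, sigma, size=avg.shape)

grads = [np.array([3.0, 4.0]), np.array([0.1, -0.2]), np.array([-1.0, 1.0])]
noisy = private_avg_gradient(grads, eps=1.0, delta=1e-5)
print(noisy.shape)
```

The second and third challenges then determine how large this noise can be while the outer loop still converges: tight Lipschitz bounds control the sensitivity (and hence sigma), and the robustness analysis tolerates the bias from inexact inner solves plus the injected noise.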
Key insights distilled from: by Guy Kornowsk... at arxiv.org, 10-01-2024
https://arxiv.org/pdf/2409.19800.pdf