The paper presents a novel privacy amplification analysis for matrix mechanisms, which are a class of differentially private algorithms used in machine learning. The key contributions are:
Conditional Composition Theorem: The authors prove a conditional composition theorem that allows analyzing a sequence of adaptive mechanisms using high-probability instead of worst-case privacy guarantees for each mechanism. This generalizes previous ideas used to analyze amplification by shuffling.
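As a rough illustration of the idea (not the paper's actual statement, which is more general and tighter), conditional composition can be thought of as basic composition where each mechanism's conditioning-event failure probability is paid additively in δ. Function and variable names below are hypothetical:

```python
def conditional_composition(per_step, failure_probs):
    """Loose illustrative accounting: sum per-step (eps, delta) guarantees
    that each hold conditioned on a high-probability event, then add the
    probability that any conditioning event fails to the total delta."""
    eps_total = sum(eps for eps, _ in per_step)
    delta_total = sum(delta for _, delta in per_step) + sum(failure_probs)
    return eps_total, delta_total

# Example: 3 adaptive steps, each (0.1, 1e-6)-DP conditioned on an event
# that fails with probability at most 1e-8.
eps, delta = conditional_composition([(0.1, 1e-6)] * 3, [1e-8] * 3)
```

The point is that the per-step guarantees only need to hold with high probability over the mechanism's own randomness, which is what makes amplification-style arguments composable.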
MMCC Algorithm: The authors propose the MMCC algorithm, which computes nearly-tight amplified privacy guarantees for any matrix mechanism with uniform sampling. MMCC approaches a lower bound as the privacy parameter ε approaches 0.
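Per-round amplified guarantees of this kind can be phrased via mixture-of-Gaussians distributions. Below is a hedged numerical sketch (the function name, grid, and parameterization are mine, not the paper's) of computing the hockey-stick divergence δ(ε) between a Gaussian mixture P and a reference Gaussian Q by 1-D integration:

```python
import numpy as np

def mog_delta(eps, probs, means, sigma):
    """delta(eps) = integral of max(P(x) - e^eps * Q(x), 0) dx, where
    P = sum_i probs[i] * N(means[i], sigma^2) and Q = N(0, sigma^2).
    Plain trapezoidal integration on a wide 1-D grid (illustrative only)."""
    x = np.linspace(-20.0 * sigma, 20.0 * sigma + max(means), 200001)
    pdf = lambda mu: np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (
        sigma * np.sqrt(2 * np.pi))
    P = sum(p * pdf(m) for p, m in zip(probs, means))
    Q = pdf(0.0)
    y = np.maximum(P - np.exp(eps) * Q, 0.0)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

# Sanity check: with a single component (no subsampling) this is the plain
# Gaussian mechanism; for mu = sigma = 1, delta(0) = Phi(0.5) - Phi(-0.5).
d = mog_delta(0.0, [1.0], [1.0], 1.0)
```

For instance, `mog_delta(1.0, [0.9, 0.1], [0.0, 1.0], 1.0)` corresponds (in one adjacency direction) to a Gaussian mechanism with sampling probability 0.1, the kind of per-round object an amplification analysis composes.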
Binary Tree Analysis: The authors show that the binary tree DP-FTRL mechanism can asymptotically match the noise added to DP-SGD with amplification, by leveraging the versatility of the conditional composition theorem.
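For context, here is a minimal sketch of the binary-tree noise structure (the classical construction, not the paper's amplified analysis; helper names are hypothetical): each node of a binary tree over the T steps receives independent Gaussian noise, and a prefix sum is covered by the O(log T) nodes of its dyadic decomposition.

```python
import numpy as np

def tree_prefix_noise(T, sigma, rng):
    """Noise component of binary-tree prefix-sum release: node (level, idx)
    covers the dyadic interval [idx * 2^level, (idx + 1) * 2^level) and
    carries one N(0, sigma^2) sample; prefix t sums the nodes in the
    dyadic decomposition of [0, t), i.e. popcount(t) of them."""
    node_noise = {}
    prefixes = []
    for t in range(1, T + 1):
        total, lo, rem = 0.0, 0, t
        for level in range(T.bit_length(), -1, -1):
            size = 1 << level
            if rem >= size:  # greedily take the largest dyadic block
                key = (level, lo // size)
                if key not in node_noise:
                    node_noise[key] = rng.normal(0.0, sigma)
                total += node_noise[key]
                lo += size
                rem -= size
        prefixes.append(total)
    return prefixes

rng = np.random.default_rng(0)
noisy = tree_prefix_noise(8, 1.0, rng)
```

Since prefix t touches only popcount(t) ≤ log2(T) + 1 noise terms, the per-prefix noise standard deviation grows only as O(σ√log T); the paper's result is that, with the amplification analysis, this mechanism can asymptotically match amplified DP-SGD.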
Empirical Improvements: The authors demonstrate significant empirical improvements in the privacy-utility tradeoffs for DP-FTRL algorithms on standard benchmarks, by applying the MMCC analysis.
The paper tackles the challenge that standard privacy amplification analysis does not directly apply to matrix mechanisms, as the noise added to each row is correlated. The authors overcome this by reducing the analysis to a sequence of mixture of Gaussians mechanisms, which can be analyzed using their conditional composition theorem.
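To make the correlation concrete, here is a hedged sketch in the standard matrix-factorization notation (A = BC; variable names are mine): i.i.d. noise z is added to CX, but the released quantity B(CX + z) = AX + Bz has its noise mixed across rows by B, which is what breaks per-round amplification arguments.

```python
import numpy as np

def matrix_mechanism(B, C, X, sigma, rng):
    """Release AX (with A = B @ C) by privatizing C @ X with iid Gaussian
    noise; the effective noise B @ z is correlated across output rows."""
    z = rng.normal(0.0, sigma, size=(C.shape[0], X.shape[1]))
    return B @ (C @ X + z)

T = 4
A = np.tril(np.ones((T, T)))   # prefix-sum workload
B, C = A, np.eye(T)            # trivial factorization A = B @ C
X = np.arange(T, dtype=float).reshape(T, 1)
est = matrix_mechanism(B, C, X, 0.0, np.random.default_rng(0))
# With sigma = 0, est recovers the exact prefix sums of X.
```

With this trivial factorization the mechanism reduces to the tree-free "output perturbation" baseline; nontrivial B and C trade off sensitivity against the correlation structure of Bz.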
Key insights from arxiv.org, by Christopher ..., 05-07-2024: https://arxiv.org/pdf/2310.15526.pdf