This study examines the role of Chernoff information in characterizing classifier performance under differential privacy (DP). Focusing on the Bayesian setting, it compares the best achievable error exponents of an adversary's classification test with ε-DP guarantees. Chernoff differential privacy is re-derived via the Radon-Nikodym derivative and shown to satisfy a composition property. Numerical evaluations demonstrate that the Chernoff-information characterization outperforms the Kullback-Leibler divergence for the Laplace mechanism in adversarial classification. The motivation: machine learning applications raise privacy concerns because they train on large datasets; adversarial ML studies security attacks that craft adversarial examples to deceive classifiers; and DP protects individual records in statistical datasets. The paper therefore addresses adversarial classification under DP through Chernoff information, compares it with ε-DP, and presents supporting numerical results.
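The Laplace-mechanism comparison above can be made concrete. The sketch below (not the paper's code; sensitivity, ε, and the integration grid are illustrative assumptions) numerically estimates the Chernoff information C(P,Q) = -min over λ in [0,1] of log ∫ p^λ q^(1-λ) dx between the output densities of a Laplace mechanism run on two neighboring datasets:

```python
# Hedged sketch: Chernoff information between the output distributions of a
# Laplace mechanism on neighboring datasets. Assumes a query with sensitivity 1
# and privacy budget eps, so the mechanism adds Laplace(0, 1/eps) noise and the
# two hypotheses are Laplace densities whose means differ by 1.
import numpy as np

def laplace_pdf(x, mu, b):
    """Density of a Laplace(mu, b) distribution."""
    return np.exp(-np.abs(x - mu) / b) / (2.0 * b)

def trapezoid(y, x):
    """Trapezoidal rule on a uniform grid (avoids NumPy version differences)."""
    dx = x[1] - x[0]
    return dx * (0.5 * y[0] + 0.5 * y[-1] + y[1:-1].sum())

def chernoff_information(mu0, mu1, b, lo=-50.0, hi=50.0, n=20001):
    """C(P,Q) = -min_{0<=lam<=1} log int p^lam q^(1-lam) dx,
    estimated by a grid search over lam."""
    x = np.linspace(lo, hi, n)
    p = laplace_pdf(x, mu0, b)
    q = laplace_pdf(x, mu1, b)
    lams = np.linspace(0.0, 1.0, 101)
    vals = [np.log(trapezoid(p**lam * q**(1.0 - lam), x)) for lam in lams]
    return -min(vals)

eps = 1.0                                  # illustrative privacy budget
c = chernoff_information(0.0, 1.0, 1.0 / eps)
print(c)  # Chernoff information in nats
```

For equal-scale Laplace densities the symmetric point λ = 1/2 attains the minimum, giving the closed form C = -log((1 + μ/(2b)) e^(-μ/(2b))); with μ = b = 1 this is about 0.0945 nats, which is smaller than the corresponding KL divergence μ/b + e^(-μ/b) - 1 ≈ 0.368, consistent with Chernoff information being the tighter error-exponent characterization.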
Key insights distilled from arxiv.org
by Ayşe... on arxiv.org, 03-18-2024
https://arxiv.org/pdf/2403.10307.pdf