The paper presents a novel problem formulation addressing the privacy-utility tradeoff in a setting with two distinct user groups, each with its own sets of private and utility attributes. Unlike prior work focused on single-group settings, it introduces a collaborative data-sharing mechanism facilitated by a trusted third-party service provider.
The key highlights and insights are:
The proposed data-sharing mechanism requires neither access to auxiliary datasets nor manual data annotation by the third party. Instead, it leverages the data contributed by the two user groups to train a separate privacy mechanism for each group.
Each privacy mechanism is trained with adversarial optimization, similar to existing approaches such as ALFR and UAE-PUPET, but adapted to the two-group setting.
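To make the training objective concrete, below is a minimal sketch of this style of adversarial optimization in PyTorch. The module names, architectures, and hyperparameters are illustrative assumptions rather than the paper's actual implementation: a privatizer learns a sanitized representation that keeps the utility attribute predictable while degrading an adversary's prediction of the private attribute.

```python
# Minimal sketch of adversarial privacy-mechanism training (assumed setup;
# not the paper's code). Binary utility and private attributes are assumed.
import torch
import torch.nn as nn

dim_x, dim_z = 16, 8  # assumed input and sanitized-representation sizes

privatizer = nn.Sequential(nn.Linear(dim_x, 32), nn.ReLU(), nn.Linear(32, dim_z))
utility_head = nn.Sequential(nn.Linear(dim_z, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(dim_z, 16), nn.ReLU(), nn.Linear(16, 1))

opt_priv = torch.optim.Adam(
    list(privatizer.parameters()) + list(utility_head.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # assumed weight on the adversarial (privacy) term

def train_step(x, y_util, s_priv):
    """One alternating update: adversary first, then privatizer + utility head."""
    # 1) The adversary learns to predict the private attribute from sanitized data.
    z = privatizer(x).detach()
    adv_loss = bce(adversary(z), s_priv)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # 2) The privatizer keeps utility accuracy high while *maximizing* the
    #    adversary's loss on the private attribute.
    z = privatizer(x)
    util_loss = bce(utility_head(z), y_util)
    priv_loss = util_loss - lam * bce(adversary(z), s_priv)
    opt_priv.zero_grad(); priv_loss.backward(); opt_priv.step()
    return util_loss.item(), adv_loss.item()
```

In the two-group setting described above, one such (privatizer, utility head, adversary) triple would presumably be instantiated per group, each with that group's own private and utility attributes.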
Experimental results on synthetic and real-world (US Census) datasets demonstrate that the approach achieves high accuracy on utility features while substantially reducing the accuracy of private-feature predictions, even when analysts have access to auxiliary datasets.
The data-sharing mechanism is compatible with various existing adversarially trained privacy techniques, and the authors show that the UAE-PUPET technique outperforms ALFR within the proposed framework.
The paper also analyzes the privacy-utility tradeoff using information-theoretic measures such as mutual information, alongside established metrics for privacy leakage, utility performance, and a combined privacy-utility tradeoff score.
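As an illustration of how such metrics could be computed on sanitized data, the following scikit-learn sketch uses common conventions: leakage and utility measured as downstream classifier accuracy, and mutual information via a nonparametric per-feature estimator. The exact metric definitions in the paper may differ.

```python
# Hedged sketch of privacy/utility metrics on sanitized data (assumed
# conventions, not necessarily the paper's exact definitions).
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression

def privacy_utility_metrics(z_train, z_test, s_train, s_test, y_train, y_test):
    """z_*: sanitized data; s_*: private labels; y_*: utility labels."""
    # Privacy leakage: accuracy of an analyst's model recovering the private attribute.
    leakage = LogisticRegression(max_iter=1000).fit(z_train, s_train).score(z_test, s_test)
    # Utility performance: accuracy on the utility attribute from sanitized data.
    utility = LogisticRegression(max_iter=1000).fit(z_train, y_train).score(z_test, y_test)
    # Mutual information between sanitized features and the private label,
    # summed over features; lower totals suggest stronger obfuscation.
    mi_private = mutual_info_classif(z_test, s_test).sum()
    return {"privacy_leakage": leakage, "utility_performance": utility,
            "mi_private": float(mi_private)}
```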
Visualization of the sanitized data in two-dimensional space further highlights the effectiveness of the proposed approach in preserving utility while obfuscating private features.
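One plausible way to produce such a view is to project sanitized records into two dimensions and color them by each attribute. The sketch below uses t-SNE, which is an assumption for illustration rather than the paper's stated projection method.

```python
# Illustrative 2D visualization of sanitized data (assumed setup; t-SNE is a
# stand-in for whatever projection the paper actually uses).
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_sanitized(z_sanitized, s_private, y_utility):
    """Project sanitized records to 2D and color by private vs. utility labels."""
    z2d = TSNE(n_components=2, random_state=0).fit_transform(z_sanitized)
    fig, axes = plt.subplots(1, 2, figsize=(10, 4))
    for ax, labels, title in [(axes[0], s_private, "colored by private attribute"),
                              (axes[1], y_utility, "colored by utility attribute")]:
        ax.scatter(z2d[:, 0], z2d[:, 1], c=labels, cmap="coolwarm", s=8)
        ax.set_title(title)
    plt.tight_layout()
    plt.show()
```

Well-obfuscated data would show mixed colors in the private-attribute panel but clearly separated colors in the utility-attribute panel.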
Source: Bishwas Mand... (arxiv.org, 04-09-2024): https://arxiv.org/pdf/2404.05043.pdf