Core Concept
Enhancing privacy in image generation through differentially private latent diffusion models.
Summary
Differentially private latent diffusion models (DP-LDMs) aim to improve the privacy-utility tradeoff in image generation. Fine-tuning only the attention modules of a pre-trained LDM with DP-SGD achieves a better privacy-utility tradeoff while cutting the number of trainable parameters by roughly 90%, which makes training more efficient and helps democratize DP image generation. The method can also generate high-quality images conditioned on text prompts under DP guarantees, something not previously attempted. These results point to a promising direction for training powerful yet efficient differentially private DMs that produce high-quality images across a range of datasets.
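The recipe above can be sketched in plain PyTorch. Everything here is an illustrative stand-in, not the paper's actual setup: `TinyLDM`, its layer sizes, and the DP hyperparameters are hypothetical, and the per-example clipping loop is a minimal DP-SGD step without the privacy accounting a real implementation (e.g. Opacus) would provide. The key idea it demonstrates is freezing all parameters except the attention module before applying DP-SGD.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pre-trained latent diffusion model;
# names and shapes are illustrative, not the paper's architecture.
class TinyLDM(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 64))
        self.attention = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
        self.head = nn.Linear(64, 16)

    def forward(self, x):
        h = self.backbone(x).unsqueeze(1)   # (B, 1, 64) so attention sees a sequence
        h, _ = self.attention(h, h, h)
        return self.head(h.squeeze(1))

model = TinyLDM()

# Freeze everything except the attention module (the paper's key idea).
for name, p in model.named_parameters():
    p.requires_grad = "attention" in name

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
# In a real LDM this fraction is roughly 10% of all parameters;
# in this toy model the exact fraction differs.

# One simplified DP-SGD step: clip each example's gradient, then
# average and add Gaussian noise before updating.
def dp_sgd_step(model, xs, ys, clip_norm=1.0, noise_mult=1.0, lr=0.1):
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xs, ys):                # microbatches of size 1
        model.zero_grad()
        loss = nn.functional.mse_loss(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        grads = [p.grad.detach().clone() for p in params]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (norm + 1e-6)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s += g * scale                  # per-example gradient clipping
    n = len(xs)
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.randn_like(s) * noise_mult * clip_norm
            p -= lr * (s + noise) / n       # noisy averaged gradient
```

After such a step, only the attention weights have moved; the frozen backbone and head are untouched, which is what shrinks both the compute cost and the number of parameters the DP noise must cover.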
Statistics
Existing privacy-enhancing techniques for DMs do not provide a good privacy-utility tradeoff.
Fine-tuning only the attention modules of LDMs with DP-SGD reduces the number of trainable parameters by roughly 90%.
The approach allows for generating realistic, high-dimensional images (256×256) conditioned on text prompts with DP guarantees.
Quotes
"A flurry of recent work highlights the tension between increasingly powerful diffusion models and data privacy."
"To address this challenge, a recent paper suggests pre-training DMs with public data, then fine-tuning them with private data using DP-SGD for a relatively short period."
"Our approach provides a promising direction for training more powerful, yet training-efficient differentially private DMs."