The paper proposes a method called DIAGNOSIS for detecting unauthorized data usage in the training or fine-tuning of text-to-image diffusion models. The key idea is to plant unique behaviors, called injected memorization, into any model trained on the protected dataset by modifying the dataset itself. This is done by applying a stealthy image transformation (the signal function) to a subset of the protected images. Models trained or fine-tuned on the coated dataset memorize the signal function, whose presence in generated images can then be detected with a binary classifier.
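The coating phase can be pictured as follows. This is a minimal, hypothetical sketch, not the authors' code: it assumes the signal function is a fixed, subtle elastic warp (similar in spirit to the warping described in the paper), and the names `make_signal_flow`, `apply_signal`, and `coat_dataset` are illustrative.

```python
import random
import torch
import torch.nn.functional as F


def make_signal_flow(h, w, strength=1.0, grid_size=4, seed=0):
    """Build one fixed, smooth warping field shared by all coated images."""
    g = torch.Generator().manual_seed(seed)
    # Low-resolution random flow, upsampled so the distortion stays smooth and stealthy.
    flow = (torch.rand(1, 2, grid_size, grid_size, generator=g) * 2 - 1) * strength
    flow = F.interpolate(flow, size=(h, w), mode="bicubic", align_corners=True)
    return flow.permute(0, 2, 3, 1) / max(h, w)  # shape (1, H, W, 2)


def apply_signal(img, flow):
    """Warp a (C, H, W) image in [0, 1] with the shared signal flow."""
    _, h, w = img.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0) + flow
    return F.grid_sample(img.unsqueeze(0), grid, align_corners=True).squeeze(0)


def coat_dataset(images, coating_rate=0.2, strength=1.0):
    """Replace a random fraction of the protected images with coated copies."""
    h, w = images[0].shape[1:]
    flow = make_signal_flow(h, w, strength)
    return [
        apply_signal(img, flow) if random.random() < coating_rate else img
        for img in images
    ]
```

Because the same warp is applied to every coated image, a model that trains on the coated set tends to reproduce this consistent distortion, which is what the later detection step looks for.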
The paper defines two types of injected memorization: unconditional and trigger-conditioned. The former is always activated, while the latter is activated only when a specific text trigger appears in the prompt. The paper then describes the overall pipeline, consisting of a dataset coating phase and a detection phase.
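A rough sketch of the detection phase is shown below, under assumed interfaces: `generate` stands in for sampling from the inspected text-to-image model, `signal_classifier` for the binary classifier trained to recognize the planted signal, and the 0.5 values are illustrative thresholds rather than the paper's exact settings.

```python
import torch


@torch.no_grad()
def memorization_strength(generate, signal_classifier, prompts, trigger=""):
    """Fraction of generated images in which the classifier finds the signal."""
    hits = 0
    for prompt in prompts:
        # For trigger-conditioned memorization, prepend the secret text trigger;
        # for unconditional memorization, `trigger` stays empty.
        img = generate(trigger + prompt)  # assumed to return a (C, H, W) tensor
        hits += int(signal_classifier(img.unsqueeze(0)).item() > 0.5)
    return hits / len(prompts)


def is_unauthorized(generate, signal_classifier, prompts, trigger="", threshold=0.5):
    """Flag the model if its memorization strength exceeds the decision threshold."""
    return memorization_strength(generate, signal_classifier, prompts, trigger) > threshold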
Experiments are conducted on mainstream text-to-image diffusion models (Stable Diffusion and VQ Diffusion) with different training or fine-tuning methods (LoRA, DreamBooth, and standard training). The results show that DIAGNOSIS detects unauthorized data usage with 100% accuracy while having only a small influence on the models' generation quality.
The paper also discusses the influence of different warping strengths and coating rates on the injected memorization and the generation quality. It compares DIAGNOSIS to an existing method and demonstrates its superior performance.
Key insights distilled from: Zhenting Wan... , arxiv.org, 04-10-2024, https://arxiv.org/pdf/2307.03108.pdf