Core Concepts
Generative models leave unique fingerprints on the samples they produce; these fingerprints enable model attribution, i.e., identifying which generative process created a given sample.
Outline
1. Abstract:
- Generative models leave fingerprints on generated samples.
- Definition of artifacts and fingerprints formalized.
- Proposed algorithm for computing fingerprints.
- Effectiveness in distinguishing generative models demonstrated.
2. Introduction:
- Importance of model attribution.
- Lack of studies on fingerprints for identifying different generative models.
- Proposed formal definitions and algorithm for computing fingerprints.
3. ManiFPT: Manifold-based Fingerprints of generative models:
- Definitions of artifacts and fingerprints in generative models.
- Estimation of artifacts and fingerprints.
- Theoretical justification of definitions.
4. Experiments:
- Hypothesis testing for fingerprints in generative models.
- Model attribution results.
- Feature space analysis.
- Cross-dataset generalization.
- Clustering structure analysis.
5. Conclusion:
- Addressing the problem of differentiating generative models.
- Proposed formal definitions of artifacts and fingerprints.
- Theoretical justification and practical usefulness demonstrated.
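The outline above centers on the definitions in Section 3: an artifact is a generated sample's offset from the data manifold, and a model's fingerprint is the collection of its artifacts. A minimal sketch of that idea, assuming (as a simplification) that the manifold is approximated by nearest-neighbor projection onto real samples in some feature space, and that comparing mean artifacts suffices to separate two toy "models":

```python
import numpy as np

def artifact(x_gen, real_feats):
    """Artifact of a generated sample: its offset from the data manifold,
    here approximated by the nearest real sample in feature space
    (a simplifying assumption, not the paper's exact estimator)."""
    dists = np.linalg.norm(real_feats - x_gen, axis=1)
    return x_gen - real_feats[np.argmin(dists)]

def fingerprint(gen_feats, real_feats):
    """Fingerprint of a model: the collection of artifacts over its samples."""
    return np.stack([artifact(x, real_feats) for x in gen_feats])

# Toy example: two hypothetical "models" biased in opposite directions
# relative to the same real distribution.
rng = np.random.default_rng(0)
real = rng.normal(size=(200, 8))
gen_a = rng.normal(size=(50, 8)) + 0.5   # model A's systematic bias
gen_b = rng.normal(size=(50, 8)) - 0.5   # model B's systematic bias

fp_a = fingerprint(gen_a, real)
fp_b = fingerprint(gen_b, real)
print(fp_a.mean(), fp_b.mean())  # the two fingerprints differ in direction
```

The point of the sketch is only that artifacts inherit a model's systematic deviation from the real data, so their distribution is a usable feature for attribution.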
Stats
Recent work shows that generative models leave distinctive traces on the samples they generate.
Durall et al. (2020) showed that CNN-based generative deep neural networks fail to correctly reproduce spectral distributions.
Wang et al. (2020) hypothesized that CNN-based generators leave common fingerprints on images.
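The spectral mismatch noted by Durall et al. is typically measured via the azimuthally averaged power spectrum of an image. A minimal sketch of that computation (the image here is synthetic noise for illustration; the function name and details are my own, not from the paper):

```python
import numpy as np

def radial_spectrum(img):
    """Azimuthally averaged power spectrum of a grayscale image:
    the 1-D spectral profile used to compare real vs. generated images."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices((h, w))
    r = np.hypot(y - cy, x - cx).astype(int)  # integer radius per pixel
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    # Average power over rings of equal radius; guard against empty radii
    return sums / np.maximum(counts, 1)

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))  # stand-in for a real or generated image
spec = radial_spectrum(img)
```

Comparing such profiles between real and generated image sets is one way the spectral fingerprint of a CNN generator shows up, typically as a deviation at high frequencies.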
Quotes
"Our proposed definition provides a useful feature space for differentiating generative models."
"Our method outperforms existing methods on model attribution, generalizes better across datasets."
"The features learned using our artifact representations show much better separated clusters."