This paper introduces Diverse Diffusion Augmentation (DDA), a novel method for Data-Free Knowledge Distillation (DFKD) that uses diffusion models to increase the diversity and quality of synthetic training data, improving the compression of large models when the original training data are unavailable.
This paper introduces a causal-inference perspective on the distribution shift between substitute and original data in the data-free knowledge distillation (DFKD) task, and proposes a Knowledge Distillation Causal Intervention (KDCI) framework to de-confound the biased student learning process.
The proposed Noisy Layer Generation (NAYER) resolves the difficulties of generating samples from random noise inputs and provides an efficient way to produce high-quality samples.
Proposing NAYER, a novel method for efficient data-free knowledge distillation using noisy layer generation and meaningful label-text embeddings.
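All of the summarized methods build on the same data-free distillation objective: a student network is trained to match a frozen teacher's temperature-softened outputs on synthetic inputs (diffusion-generated, de-confounded, or noisy-layer-generated, depending on the paper). The following is a minimal, illustrative sketch of that shared objective, not any specific paper's method; the toy linear "teacher" and "student" and the random stand-in batch are assumptions for demonstration.

```python
# Illustrative sketch of the generic data-free KD objective: the student
# matches the teacher's softened predictions on synthetic data.
# All model and variable names here are toy placeholders.
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(teacher_logits, student_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T ** 2)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))         # stand-in for a batch of synthetic samples
teacher = rng.normal(size=(16, 5))   # frozen teacher (toy linear model)
student = rng.normal(size=(16, 5))   # student to be trained on this loss
loss = kd_loss(x @ teacher, x @ student)
```

In a full DFKD pipeline, `x` would come from a learned generator (or, in DDA's case, a diffusion model), and `loss` would be minimized with respect to the student's parameters while the teacher stays fixed.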