This work introduces CAKE, a novel approach to knowledge distillation that requires no access to the original training data. CAKE mimics the teacher's decision boundaries by generating synthetic samples, is compared against existing techniques, and shows promising results across a range of datasets and model types.
Access to pre-trained models has become standard, but the original training data is often unavailable. CAKE addresses this by generating synthetic samples that effectively mimic the teacher's decision boundaries. The method is validated empirically on benchmark datasets, where it achieves competitive classification accuracy.
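The core idea, distilling a "student" from a "teacher" using only synthetic samples placed near the teacher's decision boundary, can be illustrated with a toy sketch. This is not the paper's actual CAKE algorithm (which works with deep networks and a contrastive objective); here the teacher is a hypothetical fixed linear classifier, and the boundary-seeking step and the logistic-regression student are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "teacher": a fixed 2-D linear classifier standing in for a
# pre-trained network whose training data we cannot access.
w_t, b_t = np.array([1.5, -2.0]), 0.3
def teacher_logit(X):
    return X @ w_t + b_t

# Step 1 (illustrative): synthesize samples by nudging random noise toward
# the teacher's decision boundary, stopping at a logit of +/-0.5 so each
# sample keeps its side of the boundary.
X = rng.normal(size=(500, 2))
for _ in range(30):
    logit = teacher_logit(X)
    target = 0.5 * np.sign(logit)
    # gradient step on 0.5*(logit - target)^2 w.r.t. X
    X -= 0.1 * (logit - target)[:, None] * w_t[None, :]

# Step 2: distill -- train a logistic-regression student on the teacher's
# labels for the synthetic samples; no real data is ever used.
y = (teacher_logit(X) > 0).astype(float)
w_s, b_s = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w_s + b_s)))
    g = p - y
    w_s -= 0.5 * X.T @ g / len(X)
    b_s -= 0.5 * g.mean()

# The student should now largely agree with the teacher on fresh inputs.
X_test = rng.normal(size=(1000, 2))
agree = np.mean((teacher_logit(X_test) > 0) == ((X_test @ w_s + b_s) > 0))
```

Because the synthetic samples bracket the teacher's boundary symmetrically, the student's decision surface converges toward the teacher's, which is the intuition behind boundary-mimicking distillation.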
CAKE's effectiveness is demonstrated through ablation studies on CIFAR-10, which show that components such as the contrastive loss and prior-knowledge injection each improve student accuracy. The method also succeeds in compressing models of varying depth and architecture.
Furthermore, CAKE is compared against tailored methods and achieves comparable performance without the common assumptions or data-access requirements those methods rely on. Future research directions include privacy-preserving methodologies built on the synthetic samples CAKE generates.
Source: arxiv.org