Key Concepts
Models trained on distilled datasets exhibit improved adversarial robustness, and incorporating distilled images into training shows potential for further enhancing model robustness.
Summary
This work introduces a benchmark for evaluating the adversarial robustness of models trained on distilled datasets. The study covers a range of dataset distillation methods, adversarial attack techniques, and large-scale datasets. The results show that models trained on distilled datasets generally exhibit better robustness than models trained on the original datasets, and that robustness decreases as the number of images per class (IPC) increases. Incorporating distilled images into training batches further enhances model robustness, acting as a form of adversarial training. The paper offers new insights into evaluating dataset distillation and suggests directions for future research.
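To make the kind of evaluation described above concrete, the sketch below measures a trained classifier's clean accuracy and its accuracy under a standard PGD attack. This is a minimal PyTorch sketch, not the paper's exact protocol: the model, data loader, and attack hyperparameters (epsilon, step size, number of steps) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=8/255, alpha=2/255, steps=10):
    """PGD attack: iteratively perturb inputs within an L-infinity ball
    of radius eps to maximize the cross-entropy loss (illustrative settings)."""
    adv = images.clone().detach()
    adv += torch.empty_like(adv).uniform_(-eps, eps)  # random start
    adv = torch.clamp(adv, 0, 1)
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()                       # ascend the loss
        adv = torch.min(torch.max(adv, images - eps), images + eps)   # project back into the eps-ball
        adv = torch.clamp(adv, 0, 1)
    return adv.detach()

@torch.no_grad()
def clean_accuracy(model, loader, device="cuda"):
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        correct += (model(x).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total

def robust_accuracy(model, loader, device="cuda"):
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)  # gradients needed here, so no torch.no_grad()
        with torch.no_grad():
            correct += (model(x_adv).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total
```

Comparing the two numbers for a model trained on the original data and a model trained on a distilled version of it is the basic measurement the benchmark repeats across distillation methods, attacks, and IPC settings.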
Directory:
- Introduction to Dataset Distillation
  - Dataset distillation compresses datasets while maintaining model performance.
- Importance of Adversarial Robustness Evaluation
  - Existing works focus on accuracy but overlook adversarial robustness.
- Proposed Benchmark for Adversarial Robustness Evaluation
  - Extensive evaluations using state-of-the-art distillation methods and attacks.
- Frequency Domain Analysis of Distilled Data
  - Investigating the frequency characteristics of distilled images to understand what knowledge distillation extracts (a sketch of this analysis follows the list).
- Enhancing Model Robustness with Distilled Data
  - Incorporating distilled images into training batches improves model robustness (see the batch-mixing sketch after this list).
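For the frequency-domain analysis mentioned above, a simple way to probe what distillation keeps is to compare how FFT energy in distilled versus original images is distributed from low to high spatial frequencies. The sketch below is a minimal, assumed implementation of such a comparison; the function name, binning scheme, and usage are illustrative and not the paper's code.

```python
import torch

def radial_energy_profile(images, n_bins=16):
    """Average FFT magnitude as a function of spatial-frequency radius.

    images: (N, C, H, W) tensor in [0, 1]. Returns a 1-D tensor of length
    n_bins; low indices hold low-frequency energy, high indices hold
    high-frequency energy.
    """
    spec = torch.fft.fftshift(torch.fft.fft2(images), dim=(-2, -1)).abs()
    spec = spec.mean(dim=(0, 1))  # average over images and channels -> (H, W)
    h, w = spec.shape
    yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    radius = torch.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    radius = (radius / radius.max() * (n_bins - 1)).long()
    profile = torch.zeros(n_bins)
    for b in range(n_bins):
        mask = radius == b
        if mask.any():
            profile[b] = spec[mask].mean()
    return profile

# Hypothetical usage: compare distilled and original CIFAR-10 images.
# original_images, distilled_images = ...  # (N, 3, 32, 32) tensors
# print(radial_energy_profile(original_images))
# print(radial_energy_profile(distilled_images))
```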
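The last directory item describes folding distilled images into ordinary training batches so that they act somewhat like adversarial training. Below is a minimal sketch of that batch-mixing idea, assuming the distilled images and labels are already available as tensors; the mixing ratio and function name are assumptions for illustration, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def train_epoch_with_distilled(model, optimizer, loader,
                               distilled_x, distilled_y,
                               n_distilled_per_batch=32, device="cuda"):
    """One training epoch where every batch is padded with distilled images.

    distilled_x: (M, C, H, W) distilled images; distilled_y: (M,) labels.
    The number of distilled images appended per batch is an assumption.
    """
    model.train()
    for x, y in loader:
        # Sample a small subset of distilled images and append it to the batch.
        idx = torch.randint(0, distilled_x.size(0), (n_distilled_per_batch,))
        x = torch.cat([x.to(device), distilled_x[idx].to(device)], dim=0)
        y = torch.cat([y.to(device), distilled_y[idx].to(device)], dim=0)

        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
```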
Statistics
"Our investigation of the results indicates that distilled datasets exhibit better robustness than the original datasets in most cases."
"Models trained using distilled CIFAR-10, CIFAR-100, and TinyImageNet datasets demonstrate superior robustness compared to those trained on the original dataset."