Core Concepts
Embedding adversarial robustness into dataset distillation through curvature regularization improves both the accuracy and the robustness of models trained on the distilled data.
Abstract
Dataset distillation aims to reduce dataset size while maintaining utility. The GUARD method incorporates curvature regularization into the distillation process to synthesize robust datasets. Theoretical analysis shows that the adversarial loss is governed in part by the curvature of the loss landscape. GUARD outperforms comparable methods in both clean accuracy and adversarial robustness, its efficiency keeps computational overhead low, and it transfers readily to various dataset distillation methods.
Stats
Recent research focuses on improving the accuracy of models trained on distilled datasets.
Dataset distillation synthesizes smaller datasets that still train high-performing models.
Adversarial robustness is crucial for trustworthy machine learning.
The GUARD method incorporates curvature regularization into the distillation process.
Evaluation on ImageNette, Tiny ImageNet, and ImageNet shows that GUARD outperforms standard adversarial training.
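To make the curvature-regularization idea concrete, below is a minimal sketch of a CURE-style curvature penalty: it estimates local curvature by a finite difference of input gradients along the gradient direction, a term that could be added to a distillation objective. The logistic model, the step size h, and all function names here are illustrative assumptions, not GUARD's actual implementation.

```python
import numpy as np

def loss_grad_wrt_input(x, w, y):
    """Gradient of the logistic loss -log(sigmoid(y * w.x)) w.r.t. the input x.
    (Toy linear model used only to illustrate the penalty.)"""
    margin = y * np.dot(w, x)
    sigma = 1.0 / (1.0 + np.exp(-margin))
    return -(1.0 - sigma) * y * w

def curvature_penalty(x, w, y, h=1e-2):
    """Finite-difference curvature estimate along the gradient direction:
    ||grad L(x + h*z) - grad L(x)||^2, with z the normalized sign of the
    input gradient. Small values indicate a locally flat (robust) loss."""
    g = loss_grad_wrt_input(x, w, y)
    z = np.sign(g)
    z = z / (np.linalg.norm(z) + 1e-12)  # unit-length perturbation direction
    g_shifted = loss_grad_wrt_input(x + h * z, w, y)
    return float(np.sum((g_shifted - g) ** 2))

rng = np.random.default_rng(0)
x = rng.normal(size=8)   # one synthetic example
w = rng.normal(size=8)   # toy model weights
penalty = curvature_penalty(x, w, y=1.0)
print(penalty)
```

In a distillation loop, a term like this would be weighted and added to the usual matching loss, steering the synthetic examples toward regions where the loss surface is flat and hence harder to attack.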
Quotes
"Dataset distillation allows for significant computational load savings while maintaining model accuracy."
"Our work bridges the gap between dataset distillation and adversarial robustness."
"GUARD's efficiency minimizes computational overhead in the distillation process."