Core Concepts
Continual adversarial defense is crucial for ensuring the reliability of deep neural networks in real-world deployment, where new attacks can emerge in sequence. The authors propose a lifelong defense baseline called Anisotropic & Isotropic Replay (AIR) to alleviate catastrophic forgetting when adapting to new attacks.
Abstract
The paper addresses the challenge of achieving continual adversarial robustness under attack sequences. It first verifies that adapting to a new attack can cause catastrophic forgetting of defenses against previous attacks. To address this issue, the authors propose AIR as a memory-free continual adversarial defense baseline.
Key highlights:
- Existing adversarial defense methods are designed for a one-shot setting and cannot adapt to new attacks, leaving them insufficiently robust against attack sequences.
- The authors empirically verify that standard adversarial training suffers catastrophic forgetting under attack sequences, highlighting the need for continual adversarial defense.
- AIR combines isotropic and anisotropic data augmentation to alleviate catastrophic forgetting within a self-distillation pseudo replay paradigm.
- The isotropic augmentation helps break specific adversarial patterns, while the anisotropic mix-up augmentation provides richer fusion semantics.
- A regularizer is introduced to balance the trade-off between plasticity and stability by aligning the hidden-layer features of new and pseudo-replay attacks.
- Experiments demonstrate that AIR can approximate or even exceed the empirical performance of Joint Training, which is commonly regarded as the upper bound for continual learning methods.
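The two replay augmentations and the alignment regularizer described above can be illustrated with a minimal, framework-agnostic sketch. This is not the authors' implementation: the function names, the Gaussian form of the isotropic noise, the Beta-sampled mix-up coefficient, and the MSE alignment term are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def isotropic_augment(x, sigma=0.1):
    """Isotropic replay (assumed form): additive Gaussian noise perturbs
    every direction equally, helping break attack-specific patterns."""
    return x + sigma * rng.standard_normal(x.shape)

def anisotropic_mixup(x_old, x_new, alpha=1.0):
    """Anisotropic replay (assumed form): mix-up between an old-attack
    input and a new-attack input, producing fused semantics along a
    data-dependent direction. Returns the mix and its coefficient."""
    lam = rng.beta(alpha, alpha)
    return lam * x_old + (1.0 - lam) * x_new, lam

def alignment_loss(feat_new, feat_replay):
    """Regularizer (assumed MSE form): aligning hidden-layer features of
    new and pseudo-replay attacks trades plasticity against stability."""
    return float(np.mean((feat_new - feat_replay) ** 2))
```

In a self-distillation pseudo-replay loop, a frozen copy of the model would produce features for the augmented (pseudo-replay) inputs, and `alignment_loss` would be added to the new-attack training objective so the updated model stays close to its earlier behavior.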