Key Concepts
Robust methods to accurately identify both out-of-distribution and adversarially perturbed inputs, even when those inputs are crafted specifically to evade the out-of-distribution detector.
Summary
The content discusses the intersection of out-of-distribution (OOD) detection and adversarial robustness in deep neural networks (DNNs). It introduces a taxonomy that categorizes existing work based on the nature of distributional shifts, including semantic shifts (e.g., anomaly detection, open set recognition, OOD detection) and covariate shifts (e.g., sensory anomaly detection, adversarial robustness, domain generalization).
The key focus is on two research directions at the intersection of OOD detection and adversarial robustness:
Robust OOD detection: Techniques that accurately distinguish in-distribution (ID) from out-of-distribution (OOD) inputs, whether those inputs are clean or adversarially perturbed. This includes outlier exposure-based methods, learning-based methods, score-based methods, and other approaches.
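As one concrete illustration of the score-based family (a widely used baseline, not necessarily any specific method covered in the survey), OOD inputs can be flagged by thresholding the model's maximum softmax probability (MSP). The function names and the threshold value below are illustrative choices, not from the source:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    # Maximum softmax probability: high for confident (likely ID)
    # predictions, lower for uncertain (possibly OOD) inputs.
    return softmax(logits).max(axis=-1)

def flag_ood(logits, threshold=0.5):
    # Flag inputs whose confidence falls below the threshold as OOD.
    # The threshold would be tuned on held-out ID data in practice.
    return msp_score(logits) < threshold

# A confident ID-like prediction vs. a flat, uncertain one.
id_logits  = np.array([[6.0, 0.5, 0.2]])
ood_logits = np.array([[1.0, 1.1, 0.9]])
print(flag_ood(id_logits))   # confident logits: not flagged
print(flag_ood(ood_logits))  # near-uniform logits: flagged as OOD
```

The limitation motivating *robust* OOD detection is exactly that such confidence scores can themselves be attacked: a small adversarial perturbation can push an OOD input's MSP above the threshold, which is why the surveyed methods harden the detector rather than rely on a plain score.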
Unified robustness: Methods that aim to make DNNs robust against both adversarial and OOD inputs simultaneously, or detect both types of inputs using a unified approach. This includes data augmentation-based, learning-based, score-based, and other unified robust techniques.
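To make the score-based unified direction concrete, one illustrative single scalar that has been used as a post-hoc detection signal is the free-energy score, E(x) = -T * logsumexp(logits / T); one score function can then serve as the unified test for shifted inputs. This is a sketch of the general idea, not an implementation of any particular surveyed method:

```python
import numpy as np

def energy_score(logits, T=1.0):
    # Free-energy score: E(x) = -T * logsumexp(logits / T).
    # Lower (more negative) energy suggests an in-distribution input;
    # a single threshold on this scalar gives one unified detector.
    z = logits / T
    m = z.max(axis=-1, keepdims=True)          # stabilize logsumexp
    return -T * (m.squeeze(-1) + np.log(np.exp(z - m).sum(axis=-1)))

id_logits  = np.array([[6.0, 0.5, 0.2]])
ood_logits = np.array([[1.0, 1.1, 0.9]])
print(energy_score(id_logits))   # lower energy (ID-like)
print(energy_score(ood_logits))  # higher energy (shifted input)
```

The appeal for unified robustness is operational: a deployment pipeline needs only one post-training check and one threshold, rather than separate detectors for adversarial and OOD inputs.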
The content provides a detailed analysis of the existing work in these two areas, highlighting their strengths, limitations, and potential future research directions.
Statistics
"Deep neural networks (DNNs) deployed in real-world applications can encounter out-of-distribution (OOD) data and adversarial examples."
"Adversarial perturbations, while falling under the category of covariate shifts, stand apart from others due to their malicious intent in crafting imperceptible alterations."
"Adopting a single technique to strengthen a model's robustness against both adversarial and OOD inputs, termed henceforth as unified robustness, is important as it yields several benefits in real-world systems."
Quotes
"Robust OOD Detection involves developing techniques capable of identifying and handling OOD inputs, even when they are subjected to adversarial modifications or perturbations aimed at evading the detector."
"Unified Robustness methods handle both adversarial and OOD inputs through a single approach. These methods either detect such distributional shifts using post-training mechanisms or enhance the model's robustness against these shifts during training."