The content discusses the intersection of out-of-distribution (OOD) detection and adversarial robustness in deep neural networks (DNNs). It introduces a taxonomy that categorizes existing work by the nature of the distributional shift: semantic shift (addressed by anomaly detection, open set recognition, and OOD detection) and covariate shift (addressed by sensory anomaly detection, adversarial robustness, and domain generalization).
The key focus is on two research directions at the intersection of OOD detection and adversarial robustness:
Robust OOD detection: techniques that accurately identify both clean and adversarially perturbed in-distribution (ID) and OOD inputs. These include outlier exposure-based, learning-based, score-based, and other methods (the score-based family is sketched after this list).
Unified robustness: methods that make DNNs robust to adversarial and OOD inputs simultaneously, or that detect both types of inputs with a single mechanism. These include data augmentation-based, learning-based, score-based, and other unified approaches (a combined training objective is sketched below).
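To make the score-based family concrete, here is a minimal sketch of two standard scoring baselines, maximum softmax probability (Hendrycks & Gimpel, 2017) and the energy score (Liu et al., 2020). These are common reference points rather than the specific robust detectors surveyed, and `model`, `x`, and `threshold` are assumed placeholders: a trained classifier, an input batch, and a threshold calibrated on held-out ID data.

```python
import torch
import torch.nn.functional as F

def msp_score(logits):
    # Maximum softmax probability (Hendrycks & Gimpel, 2017):
    # higher score => more likely in-distribution.
    return F.softmax(logits, dim=-1).max(dim=-1).values

def energy_score(logits, temperature=1.0):
    # Energy score (Liu et al., 2020): E(x) = -T * logsumexp(f(x) / T);
    # lower energy => more likely in-distribution.
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)

@torch.no_grad()
def flag_ood(model, x, threshold):
    # Flag inputs whose energy exceeds the calibrated threshold as OOD.
    model.eval()
    return energy_score(model(x)) > threshold
```

The robustness question the survey raises is precisely that such scores, computed on an undefended network, can be manipulated by small adversarial perturbations, which is what motivates the robust variants.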
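As a rough illustration of how a unified objective can be assembled, the sketch below combines PGD adversarial training (Madry et al., 2018) on ID data with an outlier-exposure term (Hendrycks et al., 2019) that pushes OOD predictions toward the uniform distribution. The names `pgd_attack` and `unified_loss` and the weight `lam` are illustrative assumptions, not the exact objective of any surveyed method.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # L-inf PGD (Madry et al., 2018): gradient ascent on the loss,
    # projected back into the eps-ball around x after every step.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into eps-ball
        x_adv = x_adv.clamp(0, 1)                 # keep valid pixel range
    return x_adv.detach()

def unified_loss(model, x_id, y_id, x_ood, lam=0.5):
    # Adversarial cross-entropy on perturbed ID data ...
    ce = F.cross_entropy(model(pgd_attack(model, x_id, y_id)), y_id)
    # ... plus outlier exposure: cross-entropy of OOD predictions
    # against the uniform distribution (up to an additive constant).
    oe = -F.log_softmax(model(x_ood), dim=-1).mean()
    return ce + lam * oe
```

A single loss of this shape is one way to pursue both goals at once: the first term hardens the classifier against perturbed ID inputs, while the second keeps its confidence low on auxiliary outliers so that score-based detection remains usable.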
The content provides a detailed analysis of existing work in these two areas, highlighting strengths, limitations, and promising directions for future research.
Source: arxiv.org