The article presents a framework for efficiently detecting out-of-distribution (OOD) samples in both supervised and unsupervised learning contexts. The authors reframe OOD detection as a statistical two-sample testing problem: the null hypothesis states that the test data come from the same distribution as the training data, and the alternative states that the test data come from a different distribution.
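Concretely, writing P for the training (in-distribution) law and Q for the law generating the test data (symbols chosen here for illustration, not taken verbatim from the paper), the test can be stated as:

```latex
H_0 : Q = P
\qquad \text{vs.} \qquad
H_1 : Q \neq P
```

The Wasserstein distance described next then serves as the test statistic for deciding between these hypotheses.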
The authors propose the Wasserstein distance as the test statistic and derive theoretical guarantees on the power of the resulting OOD test. Specifically, they show that the test is uniformly consistent as the number of OOD samples goes to infinity, provided the OOD distribution is sufficiently far from the in-distribution. They also derive non-asymptotic lower bounds on the test's power and discuss its limitations when the OOD distribution is close to the in-distribution.
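As a concrete illustration, the sketch below implements a generic Wasserstein two-sample test in Python. It is a minimal sketch, not the authors' exact procedure: it uses SciPy's one-dimensional empirical Wasserstein distance (so inputs are assumed to be scalar scores or one-dimensional feature projections) and calibrates the rejection threshold with a permutation test; the function name and parameter values are illustrative.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def wasserstein_ood_test(in_dist, test_batch, n_perm=1000, alpha=0.05, seed=0):
    """Reject H0 (same distribution) when the observed Wasserstein
    distance is large relative to its permutation null distribution."""
    rng = np.random.default_rng(seed)
    observed = wasserstein_distance(in_dist, test_batch)
    pooled = np.concatenate([in_dist, test_batch])
    n = len(in_dist)
    null_stats = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(pooled)  # relabel the pooled sample under H0
        null_stats[i] = wasserstein_distance(perm[:n], perm[n:])
    # Permutation p-value with the usual +1 correction.
    p_value = (1 + np.sum(null_stats >= observed)) / (n_perm + 1)
    return observed, p_value, p_value < alpha

# Example: in-distribution N(0,1) scores vs. a mean-shifted batch.
rng = np.random.default_rng(1)
stat, p, reject = wasserstein_ood_test(rng.normal(0.0, 1.0, 500),
                                       rng.normal(1.5, 1.0, 200))
print(f"W = {stat:.3f}, p = {p:.4f}, reject H0: {reject}")
```

The permutation calibration controls the false-alarm rate under the null without distributional assumptions, at the cost of n_perm extra distance computations per test.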
The authors compare the Wasserstein distance-based test to other OOD detection methods, such as those based on entropy and k-nearest neighbors, and argue that the Wasserstein distance has several advantages, including its ability to capture geometric information about the data distributions.
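To make the comparison concrete, here is a minimal sketch of a k-nearest-neighbor OOD baseline of the kind the authors compare against (the feature inputs, parameter values, and helper name are illustrative assumptions, not the paper's exact baseline): each test point is scored by its distance to its k-th nearest training point, a pointwise criterion rather than a batch-level distributional one.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_ood_scores(train_feats, test_feats, k=5):
    """Distance from each test point to its k-th nearest training point."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_feats)
    dists, _ = nn.kneighbors(test_feats)
    return dists[:, -1]

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(1000, 8))   # in-distribution features
shifted = rng.normal(3.0, 1.0, size=(100, 8))  # mean-shifted (OOD) batch
# Threshold at the 95th percentile of in-distribution self-scores;
# k+1 neighbors here skip each training point's zero-distance self-match.
threshold = np.quantile(knn_ood_scores(train, train, k=6), 0.95)
flags = knn_ood_scores(train, shifted, k=5) > threshold
print(f"flagged {flags.mean():.0%} of the shifted batch as OOD")
```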
The article includes two experiments: one on a simple generative-model example and another on an image-classification task using the MNIST and Fashion-MNIST datasets. The results show the Wasserstein distance-based OOD test to be effective compared with the other methods.