The authors propose ExIFFI, a novel interpretability approach for the Extended Isolation Forest (EIF) algorithm, and introduce EIF+, an enhanced variant of EIF designed to improve generalization capabilities. The work aims to address the need for interpretable and effective anomaly detection models.
Tree-based anomaly detection ensembles are well suited to active learning, and the greedy strategy of querying labels for the instances with the highest anomaly scores is an efficient approach. Novel batch and streaming active learning algorithms are developed to improve the diversity of discovered anomalies and to handle data drift, respectively.
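The greedy querying strategy described above can be sketched in a few lines. This is an illustrative sketch, not the papers' implementation: the anomaly scorer itself (e.g. an Isolation Forest ensemble) is assumed to be given, and `greedy_query` and `budget` are hypothetical names.

```python
import numpy as np

def greedy_query(scores: np.ndarray, budget: int) -> np.ndarray:
    """Greedy active-learning query: return the indices of the `budget`
    unlabeled instances with the highest anomaly scores.
    (Sketch only; the scores are assumed to come from some tree-based
    anomaly ensemble, which is not shown here.)"""
    return np.argsort(scores)[::-1][:budget]

# Toy example: six unlabeled instances with precomputed anomaly scores.
scores = np.array([0.12, 0.87, 0.33, 0.91, 0.05, 0.64])
print(greedy_query(scores, budget=2))  # the two most anomalous instances
```

An analyst would label the returned instances, feed the labels back to the model, rescore, and repeat until the labeling budget is exhausted.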
Applying differential privacy (DP) to anomaly detection (AD) models significantly impacts their performance and explainability, with the trade-off varying across datasets and AD algorithms.
Contrastive learning inherently promotes a large norm for the contrastive features of in-distribution samples, creating a separation between in-distribution and out-of-distribution data in the feature space. This property can be leveraged to improve out-of-distribution detection by incorporating out-of-distribution samples into the contrastive learning process.
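The norm-based separation described above suggests a simple detection rule: score each sample by the (negative) L2 norm of its contrastive features, so that small-norm samples rank as more likely out-of-distribution. The sketch below is a hedged illustration of that rule on synthetic vectors; the function name and the assumption that in-distribution features have larger norms are taken from the claim above, not from any specific implementation.

```python
import numpy as np

def norm_ood_score(features: np.ndarray) -> np.ndarray:
    """OOD score = negative L2 norm of each feature vector.
    Larger norms (in-distribution, per the claim above) yield lower scores;
    higher scores flag likely out-of-distribution samples."""
    return -np.linalg.norm(features, axis=1)

rng = np.random.default_rng(0)
# Synthetic stand-ins: in-distribution features with larger norms than OOD ones.
in_dist = rng.normal(size=(5, 8)) * 5.0
ood = rng.normal(size=(5, 8)) * 0.5
scores = norm_ood_score(np.vstack([in_dist, ood]))
# The small-norm (OOD) samples receive the higher scores.
print((scores[5:].min() > scores[:5].max()))
```

In practice the score would be computed on the encoder's contrastive embedding and thresholded, with the threshold calibrated on held-out in-distribution data.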