Key Concepts
The authors propose a novel approach for dense outlier detection and open-set recognition based on training with noisy negative images, aiming to improve performance across various datasets. Sharing features between the semantic segmentation and outlier detection tasks substantially improves the model's ability to recognize outliers without significantly degrading semantic segmentation accuracy.
Summary
The content discusses a novel approach for dense outlier detection and open-set recognition based on training with noisy negative images. The method aims to improve performance across various datasets by sharing features between the semantic segmentation and outlier detection tasks. The proposed model achieves competitive results on benchmarks such as WildDash 1, Fishyscapes, and StreetHazard.
Deep convolutional models often produce unreliable predictions for inputs outside the training distribution, which has led to increased interest in detecting outlier images. Unlike previous work, this study focuses on the dense prediction context, using noisy negative samples from a general-purpose dataset to encourage the model to recognize outliers. By pasting jittered negative patches over inlier training images, the proposed approach achieves promising results on dense open-set recognition benchmarks.
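The patch-pasting step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the patch-size range (1/8 to 1/2 of the image side) and the use of a raw crop instead of a resized one are assumptions made for clarity.

```python
import numpy as np

def paste_jittered_patch(inlier, negative, rng):
    """Paste a randomly sized, randomly placed crop of a negative image
    onto an inlier image; return the mixed image and an outlier mask."""
    h, w, _ = inlier.shape
    # Jitter the patch size (the fractions below are illustrative, not from the paper).
    ph = rng.integers(h // 8, h // 2)
    pw = rng.integers(w // 8, w // 2)
    # Take a random crop from the negative image.
    ny = rng.integers(0, negative.shape[0] - ph + 1)
    nx = rng.integers(0, negative.shape[1] - pw + 1)
    patch = negative[ny:ny + ph, nx:nx + pw]
    # Paste the crop at a random location on the inlier image.
    y = rng.integers(0, h - ph + 1)
    x = rng.integers(0, w - pw + 1)
    mixed = inlier.copy()
    mixed[y:y + ph, x:x + pw] = patch
    # The mask marks pasted pixels as outliers for the detection loss.
    mask = np.zeros((h, w), dtype=bool)
    mask[y:y + ph, x:x + pw] = True
    return mixed, mask
```

The returned mask supplies per-pixel outlier labels, so the same image can supervise both the segmentation head (on inlier pixels) and the outlier-detection head.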
The study highlights the importance of extending datasets with atypical images that challenge current models. It emphasizes the need for algorithms capable of recognizing image regions foreign to the training distribution. Training on mixed batches of inliers and negatives stabilizes the development of batch-norm statistics.
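A mixed batch as described above might be composed as in the sketch below. The negative fraction and the shuffling scheme are assumptions for illustration; the source only states that batches mix inliers and negatives.

```python
import numpy as np

def make_mixed_batch(inlier_images, negative_images, batch_size, neg_fraction, rng):
    """Compose each batch from inliers plus a fixed fraction of negatives,
    so batch-norm statistics are estimated on both distributions."""
    n_neg = max(1, int(round(batch_size * neg_fraction)))
    n_in = batch_size - n_neg
    in_idx = rng.choice(len(inlier_images), n_in, replace=False)
    neg_idx = rng.choice(len(negative_images), n_neg, replace=False)
    batch = [inlier_images[i] for i in in_idx] + [negative_images[j] for j in neg_idx]
    labels = [0] * n_in + [1] * n_neg  # 1 marks a sample drawn from the negative set
    # Shuffle so inliers and negatives are interleaved within the batch.
    perm = rng.permutation(batch_size)
    return [batch[p] for p in perm], [labels[p] for p in perm]
```

Because every batch contains samples from both distributions, the running batch-norm statistics never drift toward either one alone.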
Experiments target benchmarks including WildDash 1, Fishyscapes Lost and Found, and StreetHazard. The proposed approach outperforms existing methods by efficiently combining the semantic segmentation and outlier detection tasks. Overall, the study provides valuable insights into enhancing dense prediction models for outlier detection.
Statistics
Deep convolutional models often produce inadequate predictions for inputs which are foreign to the training distribution.
Our experiments target two dense open-set recognition benchmarks (WildDash 1 and Fishyscapes) and one dense open-set recognition dataset (StreetHazard).
Extensive performance evaluation indicates competitive potential of the proposed approach.
We train our models on inliers from Cityscapes train, Vistas train, and StreetHazard train.
We collect ImageNet-1k-bb by picking the first bounding box from the 544,546 ImageNet-1k images with bounding box annotations.
We resize WD-Pascal and WD-LSUN images to 512 pixels.
We resize validation and test images to 768 pixels.
Our models are based on DenseNet-169 with ladder-style upsampling due to best overall validation performance.
We validate all hyperparameters on WD-Pascal and WD-LSUN.
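The resizing statistics above (512 pixels for WD-Pascal and WD-LSUN, 768 pixels for validation and test images) can be illustrated with an aspect-preserving resize. Whether the stated size applies to the shorter side is an assumption; the nearest-neighbour interpolation is a simplification standing in for whatever interpolation the authors used.

```python
import numpy as np

def resize_shorter_side(img, target):
    """Nearest-neighbour resize so the shorter image side equals `target`
    pixels while preserving aspect ratio (illustrative only; the paper
    does not specify which side the stated sizes refer to)."""
    h, w = img.shape[:2]
    scale = target / min(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # Map each output pixel back to its nearest source pixel.
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    return img[ys[:, None], xs]
```

For example, a 600x900 image resized with `target=512` becomes 512x768, matching the two sizes mentioned in the statistics.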
Quotes
"Our contribution is as follows: we propose a novel approach for dense outlier detection based on discriminative training with noisy negative images."
"We show that successful operation in dense prediction context requires random pasting of negative patches to inlier training images."
"Evaluation on two rigorous benchmarks indicates that our approach outperforms the state of the art."