Core Concepts
A two-stage Plug and Play Active Learning (PPAL) strategy for object detection that combines uncertainty-based and diversity-based sampling to improve detector performance under a fixed annotation budget.
Abstract
Annotating datasets for object detection is costly and time-consuming. The Plug and Play Active Learning (PPAL) strategy aims to minimize this burden by selecting informative samples within an annotation budget. PPAL consists of two stages. In the first, Difficulty Calibrated Uncertainty Sampling re-weights instance uncertainties according to category-wise classification and localization difficulties; in the second, Category Conditioned Matching Similarity selects a diverse query set from the most uncertain candidates. PPAL outperforms previous methods on MS-COCO and Pascal VOC across different detector architectures without modifying model structures or training pipelines.
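The first stage can be illustrated with a small sketch. Here, an image-level acquisition score is formed by re-weighting each detected box's uncertainty with a category-wise difficulty coefficient; the function name, the sum aggregation, and the toy coefficients are illustrative assumptions, not PPAL's exact formulation.

```python
import numpy as np

def difficulty_calibrated_score(uncertainties, labels, difficulty):
    """Image-level score: each box's uncertainty is scaled by the
    difficulty coefficient of its predicted category, then summed.
    (A sketch; PPAL's actual aggregation may differ.)"""
    uncertainties = np.asarray(uncertainties, dtype=float)
    labels = np.asarray(labels, dtype=int)
    return float(np.sum(uncertainties * difficulty[labels]))

# Two detected boxes: category 0 is "hard" (coefficient 2.0),
# category 1 is "easy" (coefficient 0.5).
difficulty = np.array([2.0, 0.5])
score = difficulty_calibrated_score([0.8, 0.4], [0, 1], difficulty)
# 0.8 * 2.0 + 0.4 * 0.5 = 1.8
```

Boxes from hard categories thus contribute more to an image's score, steering the budget toward categories the detector currently struggles with.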
Key Claims
"PPAL outperforms prior work by a large margin."
"Difficulty Calibrated Uncertainty Sampling leverages category-wise difficulty coefficients."
"Category Conditioned Matching Similarity computes similarities for multi-instance images."
"PPAL does not modify model architectures or training pipelines."
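The second stage can be sketched as follows: a similarity between two multi-instance images is computed by matching each instance to its most similar same-category instance in the other image, and a diverse query set is then chosen greedily. The cosine-similarity matching, mean aggregation, and farthest-first selection are assumptions made for illustration; PPAL's exact matching and selection rules may differ.

```python
import numpy as np

def ccms(feats_a, labels_a, feats_b, labels_b):
    """Category-conditioned matching similarity sketch: match each
    instance in image A to its most similar same-category instance in
    image B (cosine similarity), then average the matched scores."""
    scores = []
    for f, c in zip(feats_a, labels_a):
        same = feats_b[labels_b == c]          # same-category instances only
        if same.size == 0:
            continue                           # no match for this category
        cos = same @ f / (np.linalg.norm(same, axis=1) * np.linalg.norm(f))
        scores.append(cos.max())
    return float(np.mean(scores)) if scores else 0.0

def select_diverse(images, budget):
    """Greedy farthest-first sketch: seed with the top-uncertainty
    candidate (index 0), then repeatedly add the image least similar
    to anything already chosen."""
    chosen = [0]
    while len(chosen) < min(budget, len(images)):
        best_i, best_s = None, np.inf
        for i in range(len(images)):
            if i in chosen:
                continue
            s = max(ccms(*images[i], *images[j]) for j in chosen)
            if s < best_s:
                best_i, best_s = i, s
        chosen.append(best_i)
    return chosen

# Three toy single-instance images: index 1 duplicates index 0,
# index 2 has an orthogonal feature. With a budget of 2, the
# orthogonal image is preferred over the duplicate.
images = [
    (np.array([[1.0, 0.0]]), np.array([0])),
    (np.array([[1.0, 0.0]]), np.array([0])),
    (np.array([[0.0, 1.0]]), np.array([0])),
]
picked = select_diverse(images, 2)  # [0, 2]
```

Conditioning the match on category keeps the similarity meaningful for multi-instance images: a person in one image is never compared against a car in another.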