Core Concepts
Domain adaptive object detection (DAOD) methods face benchmarking pitfalls; the Align and Distill (ALDI) framework addresses them, enabling fair comparisons and state-of-the-art results.
Abstract
The content discusses benchmarking pitfalls that have obscured progress in domain adaptive object detection (DAOD). It introduces the Align and Distill (ALDI) framework to address these issues, contributing a unified benchmarking protocol, a new dataset (CFC-DAOD), and a method, ALDI++, that achieves state-of-the-art results. The article examines the impact of network initialization, source and target augmentations, self-distillation techniques, and feature alignment; it also presents ablation studies on the components of ALDI++, fair comparisons with existing methods, and implications for DAOD research.
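One of the self-distillation ideas referenced above is Mean-Teacher-style pseudo-labeling: a teacher model predicts on unlabeled target-domain images, and only sufficiently confident detections are kept to supervise the student. A minimal sketch, assuming illustrative names and a threshold value not taken from the paper:

```python
# Sketch of confidence-thresholded pseudo-labeling, as used in
# Mean-Teacher-style DAOD pipelines. Names and the 0.8 cutoff are
# illustrative assumptions, not ALDI's actual API or hyperparameters.

CONF_THRESHOLD = 0.8  # assumed confidence cutoff for teacher predictions

def filter_pseudo_labels(teacher_preds, threshold=CONF_THRESHOLD):
    """Keep only teacher detections confident enough to supervise the student.

    teacher_preds: list of (box, class_label, confidence) tuples.
    Returns (box, class_label) pairs for detections above the threshold.
    """
    return [(box, label)
            for box, label, score in teacher_preds
            if score >= threshold]

# Example: two teacher detections; only the confident one survives.
preds = [((0, 0, 10, 10), "fish", 0.95), ((5, 5, 20, 20), "fish", 0.40)]
print(filter_pseudo_labels(preds))  # [((0, 0, 10, 10), 'fish')]
```

The surviving detections are then treated as ground truth when computing the student's detection loss on target-domain images.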
Structure:
Introduction to DAOD Challenges
Existing Methodological Themes in DAOD Research
Introduction of Align and Distill Framework (ALDI)
New Benchmark Dataset: CFC-DAOD
Proposed Method: ALDI++
Related Work Overview
Experiments Conducted with ALDI++
Ablation Studies on Components of ALDI++
Discussion on Findings and Conclusions
Stats
DAOD methods have roughly doubled performance over source-only baselines, but reported gains are inflated by benchmarking pitfalls.
ALDI++ outperforms the previous state of the art by significant margins.
Source-only models improve substantially when trained with strong augmentations and EMA weight updates.
Quotes
"ALDI++ outperforms all prior work on CS → FCS, Sim10k → CS, and CFC Kenai → Channel."
"Source-only models improved with strong augmentations lead to better performance."