Core Concepts
Contrastive Mean Teacher (COMET), a novel method that tackles the challenging task of online source-free universal domain adaptation by combining contrastive learning and entropy optimization within a mean teacher framework.
Abstract
The paper addresses the task of online source-free universal domain adaptation (online SF-UniDA), which aims to adapt a pre-trained source model to a target domain in an online manner, without access to the source data. This is a realistic but challenging scenario that has not been well studied despite its practical relevance.
The authors propose a novel method called Contrastive Mean Teacher (COMET) to address this task. COMET has the following key components:
Pseudo-labeling: COMET uses a mean teacher framework to generate reliable pseudo-labels for the target samples, leveraging entropy thresholds to handle samples of unknown classes.
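The mean teacher idea can be sketched in PyTorch as follows. This is a minimal illustration, not the authors' implementation: the function names, the momentum value, and the use of two normalized-entropy thresholds (`low_thr`/`high_thr`) are assumptions for the sketch.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    # Teacher weights track an exponential moving average of the student.
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(momentum).add_(s_p, alpha=1.0 - momentum)

@torch.no_grad()
def pseudo_label(teacher_logits, low_thr=0.3, high_thr=0.7):
    # Normalized entropy (in [0, 1]) of the teacher's softmax output.
    probs = F.softmax(teacher_logits, dim=1)
    ent = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    ent = ent / torch.log(torch.tensor(float(teacher_logits.size(1))))
    labels = probs.argmax(dim=1)
    known = ent < low_thr      # confident -> pseudo-label as a known class
    unknown = ent > high_thr   # very uncertain -> pseudo-label as "unknown"
    return labels, known, unknown  # samples in between are left unlabeled
```

Low-entropy (confident) teacher predictions become known-class pseudo-labels, while high-entropy samples are treated as unknown; ambiguous samples in between are simply discarded.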
Contrastive loss: COMET applies a contrastive loss to rebuild a feature space where samples of known classes form distinct clusters and samples of unknown classes are clearly separated from them.
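One common way to realize such a loss is an InfoNCE-style supervised contrastive objective over pseudo-labeled embeddings. The sketch below is an assumption about the general technique, not COMET's exact formulation; the convention of marking pseudo-unknown samples with label `-1` and the temperature value are choices made for illustration.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(features, labels, temperature=0.1):
    # features: (N, D) embeddings; labels: (N,) pseudo-labels,
    # with -1 marking samples pseudo-labeled as "unknown".
    z = F.normalize(features, dim=1)
    sim = z @ z.t() / temperature  # pairwise cosine similarities
    # Positive pairs: two samples sharing the same known-class pseudo-label.
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & (labels != -1).unsqueeze(0)
    pos.fill_diagonal_(False)
    # Standard InfoNCE denominator over all other samples (mask self-pairs).
    logits = sim - torch.eye(len(z), device=z.device) * 1e9
    log_prob = F.log_softmax(logits, dim=1)
    # Average over each sample's positives; samples without positives add 0.
    n_pos = pos.sum(dim=1).clamp_min(1)
    return -(log_prob * pos).sum(dim=1).div(n_pos).mean()
```

Pulling same-class positives together while treating everything else (including pseudo-unknown samples) as negatives encourages exactly the structure described above: tight known-class clusters with unknown samples pushed away from them.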
Entropy loss: COMET uses an entropy loss to ensure that the classifier output has a small entropy for samples of known classes and a large entropy for unknown samples, enabling reliable rejection of unknown samples during inference.
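A minimal sketch of this idea, assuming the pseudo-known/pseudo-unknown masks come from the pseudo-labeling step; the rejection threshold at inference and the exact sign convention are illustrative choices, not the paper's specification.

```python
import torch
import torch.nn.functional as F

def entropy_loss(logits, known_mask, unknown_mask):
    # Shannon entropy of the classifier's softmax output, per sample.
    probs = F.softmax(logits, dim=1)
    ent = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    # Minimize entropy for pseudo-known samples (sharpen predictions),
    # maximize it for pseudo-unknown samples (flatten predictions).
    loss = ent[known_mask].sum() - ent[unknown_mask].sum()
    return loss / (known_mask.sum() + unknown_mask.sum()).clamp_min(1)

@torch.no_grad()
def predict(logits, reject_thr):
    # At inference, reject high-entropy samples as unknown (-1).
    probs = F.softmax(logits, dim=1)
    ent = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    preds = probs.argmax(dim=1)
    preds[ent > reject_thr] = -1
    return preds
```

Because training drives the entropy of known and unknown samples apart, a simple entropy threshold at inference becomes a reliable criterion for rejecting unknown-class samples.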
The authors extensively evaluate COMET on two domain adaptation datasets, DomainNet and VisDA-C, across different category shift scenarios (partial-set, open-set, open-partial-set). COMET consistently outperforms competing methods and sets an initial benchmark for online SF-UniDA.
Stats
The DomainNet dataset consists of about 0.6 million images of 345 classes across three domains: painting, real, and sketch.
The VisDA-C dataset has 12 object classes and features a challenging domain shift from synthetic 2D renderings to real-world images.
Quotes
"We are the first to study the realistic but challenging task of online SF-UniDA."
"We propose COMET, a method tackling this difficult task by applying a combination of contrastive learning and entropy optimization embedded into a mean teacher."