Basic Concepts
COCA introduces a novel approach that uses textual prototypes to enhance few-shot learners in the source-free universal domain adaptation (SF-UniDA) scenario.
Summary
The COCA method addresses the challenges of source-free universal domain adaptation by utilizing textual prototypes. Rather than optimizing the image encoder, it optimizes the classifier, which improves model performance. The method comprises two modules: autonomous calibration via textual prototype (ACTP) and mutual information enhancement by context information (MIECI). Experiments demonstrate superior performance over existing UniDA and SF-UniDA models.
Introduction
- Universal domain adaptation aims to handle domain and category shifts.
- Source-free UniDA eliminates the need for direct access to source samples.
- Existing methods require extensive labeled source samples, leading to high labeling costs.
Methodology
- COCA utilizes textual prototypes for few-shot learners in SF-UniDA.
- ACTP module generates pseudo labels for self-training.
- MIECI module enhances mutual information by exploiting context information.
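To make the pseudo-labeling idea concrete, here is a minimal sketch of assigning labels by similarity to textual prototypes, in the spirit of the ACTP module described above. The function name `pseudo_labels`, the confidence threshold `tau`, and the rejection convention (`-1` for low-confidence samples) are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def pseudo_labels(image_feats, text_protos, tau=0.5):
    """Assign each image feature to its nearest textual prototype.

    image_feats: (N, D) image embeddings
    text_protos: (K, D) textual prototype embeddings (one per class)
    tau: cosine-similarity threshold below which a sample is rejected
         (hypothetical hyperparameter, not from the paper)
    """
    # L2-normalize so dot products become cosine similarities
    img = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    txt = text_protos / np.linalg.norm(text_protos, axis=1, keepdims=True)
    sims = img @ txt.T                    # (N, K) similarity matrix
    labels = sims.argmax(axis=1)          # nearest textual prototype
    confident = sims.max(axis=1) >= tau   # keep only confident matches
    labels[~confident] = -1               # -1 marks rejected samples
    return labels
```

Only the confidently labeled samples would then be used for self-training, which is the usual way pseudo-label noise is kept in check.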
Model Optimization
- Training loss includes image loss, text loss, and mask loss.
- Decision boundary adaptation focuses on optimizing the classifier.
- Inference phase separates common and unknown class samples based on uncertainty.
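A common way to realize the uncertainty-based split at inference is normalized prediction entropy: common-class samples yield confident (low-entropy) predictions, while unknown-class samples do not. The sketch below assumes this entropy criterion and the threshold name `omega`; both are illustrative, not the paper's exact formulation.

```python
import numpy as np

def split_common_unknown(probs, omega=0.55):
    """Return a boolean mask: True = common class, False = unknown.

    probs: (N, K) softmax probabilities over the K known classes
    omega: normalized-entropy threshold (hypothetical hyperparameter)
    """
    # Shannon entropy per sample, normalized to [0, 1] by log(K)
    ent = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    ent /= np.log(probs.shape[1])
    return ent < omega  # low uncertainty -> common class
```

Samples flagged `False` would be routed to the unknown category rather than forced into one of the known classes.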
Statistics
"Experiments show that COCA outperforms state-of-the-art UniDA and SF-UniDA models."