The COCA method introduces a new paradigm for tackling SF-UniDA challenges by utilizing textual prototypes to enhance few-shot learners' ability to distinguish common and unknown classes. By adapting the closed-set classifier, COCA outperforms existing UniDA and SF-UniDA models in experiments across various benchmarks.
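The idea of separating common from unknown classes with textual prototypes can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the cosine-similarity scoring, and the fixed threshold `tau` are all illustrative assumptions; COCA's actual criterion for rejecting unknown-class samples is described in the paper itself.

```python
import numpy as np

def classify_with_text_prototypes(image_feats, text_protos, tau=0.5):
    """Illustrative sketch: assign each image feature to its nearest textual
    prototype, or to 'unknown' (-1) when the best cosine similarity falls
    below the threshold tau. (Hypothetical helper, not COCA's exact rule.)

    image_feats: (N, D) array of image embeddings
    text_protos: (C, D) array of class-name text embeddings
    """
    # L2-normalise both sides so dot products equal cosine similarities
    img = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    txt = text_protos / np.linalg.norm(text_protos, axis=1, keepdims=True)
    sims = img @ txt.T                      # (N, C) cosine similarities
    preds = sims.argmax(axis=1)
    preds[sims.max(axis=1) < tau] = -1      # low similarity -> unknown class
    return preds
```

In a vision-language setting, `text_protos` would come from encoding class-name prompts with a frozen text encoder, so the classifier needs no target-domain labels for the known classes.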
The paper discusses the importance of minimizing labeling costs by utilizing VLMs for few-shot learning and zero-shot classification in the UniDA/SF-UniDA scenario. The proposed ACTP module generates pseudo labels through self-training, while the MIECI module enhances mutual information by exploiting contextual cues in images.
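A common self-training pattern behind pseudo-label generation is to keep only high-confidence predictions and ignore the rest during retraining. The sketch below shows that generic pattern only; the function name and the confidence threshold are assumptions, not ACTP's actual procedure, which the paper defines in detail.

```python
import numpy as np

def generate_pseudo_labels(probs, conf_thresh=0.9):
    """Generic self-training sketch (not ACTP itself): retain a prediction
    as a pseudo label only when its top class probability exceeds
    conf_thresh; low-confidence samples are marked -1 and ignored.

    probs: (N, C) array of predicted class probabilities.
    Returns (labels, mask) where mask flags the retained samples.
    """
    conf = probs.max(axis=1)                # confidence = top probability
    labels = probs.argmax(axis=1)           # tentative hard labels
    mask = conf >= conf_thresh              # which samples to trust
    labels = np.where(mask, labels, -1)     # drop the uncertain ones
    return labels, mask
```

The retained pseudo labels can then supervise the next training round, which is the loop that lets the adapted classifier improve without target-domain annotations.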
COCA's approach focuses on adapting the decision boundary through classifier optimization, demonstrating superior performance in OPDA, OSDA, and PDA scenarios compared to state-of-the-art methods. The ablation studies highlight the stability and effectiveness of COCA with textual prototypes across different K values.
Key Insights Extracted From
by Xinghong Liu... at arxiv.org, 03-12-2024
https://arxiv.org/pdf/2308.10450.pdf