Key Concepts
A framework that uses a language regularizer and a subspace regularizer to integrate new classes from limited data while preserving performance on base classes in the few-shot class incremental learning setting.
Summary
The paper introduces a novel framework for few-shot class incremental learning (FSCIL) that leverages a language regularizer and a subspace regularizer to integrate new classes from limited data while preserving performance on base classes.
Key highlights:
- The base model training incorporates a language regularizer that bridges the domain gap between image and text semantics, enabling the model to learn robust representations.
- The incremental training employs a semantic subspace regularizer that encourages new class representations to lie close to a convex combination of base class representations, weighted by their semantic similarity.
- Comprehensive experiments on CIFAR-100, miniImageNet, and tieredImageNet datasets demonstrate the state-of-the-art performance of the proposed framework in both single-session and multi-session FSCIL settings.
- Ablation studies highlight the importance of the language regularizer and the effectiveness of different semantic representations, similarity measures, and hyperparameter choices in the framework.
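The two regularizers described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the use of cosine distance for the language regularizer, and the softmax weighting over semantic similarities are all illustrative assumptions about how such losses are commonly computed.

```python
import numpy as np

def language_regularizer(img_feat, text_feat):
    # Illustrative: penalize the angle between an image feature and the
    # text embedding of its class name (1 - cosine similarity).
    img = img_feat / np.linalg.norm(img_feat)
    txt = text_feat / np.linalg.norm(text_feat)
    return 1.0 - float(img @ txt)

def subspace_regularizer(new_feat, base_protos, semantic_sim):
    # Illustrative: pull a new class representation toward a convex
    # combination of base class prototypes, weighted by a softmax over
    # the new class's semantic similarity to each base class.
    w = np.exp(semantic_sim) / np.exp(semantic_sim).sum()
    target = w @ base_protos  # convex combination of base prototypes
    return float(np.sum((new_feat - target) ** 2))
```

In practice both terms would be added, with tunable coefficients, to the classification loss during base and incremental training respectively; here they are shown as standalone functions only to make the geometry of the two constraints concrete.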
The authors' approach effectively leverages the inherent semantic information from vision-language models to enhance the base model's adaptability and mitigate catastrophic forgetting, leading to superior performance in the few-shot class incremental learning scenario.
Statistics
The paper does not provide any specific numerical data or statistics in the main text. The results are presented in the form of quantitative comparisons with state-of-the-art FSCIL methods on various datasets.
Quotes
The paper does not contain any direct quotes that are particularly striking or that support the key arguments.