Core Concept
How to learn new classes without forgetting previously acquired knowledge
Abstract
Abstract:
Class Incremental Learning (CIL) faces challenges due to catastrophic forgetting.
Exemplar-free CIL is even more challenging because access to previous task data is forbidden.
Introduction:
Deep learning models struggle to learn multiple tasks sequentially.
CIL aims to learn new class information without forgetting past knowledge.
Diagnosis: Domain Gaps in Exemplar-Free CIL:
Synthetic data exhibits a domain gap relative to real data, which degrades classification performance.
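To make this diagnosis concrete, below is a minimal sketch (illustrative, not the paper's metric) of one common way to quantify such a domain gap: compare feature statistics of real versus synthetic images under a frozen, ImageNet-pretrained backbone. The backbone choice and the mean-distance proxy are assumptions for illustration.

```python
# Illustrative sketch: estimate the domain gap between real and synthetic
# images by comparing feature statistics from a frozen pretrained backbone.
import torch
import torchvision.models as models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # expose the penultimate 512-d features
backbone.eval()

@torch.no_grad()
def mean_feature(images: torch.Tensor) -> torch.Tensor:
    """Mean backbone feature of a batch of normalized (N, 3, 224, 224) images."""
    return backbone(images).mean(dim=0)

@torch.no_grad()
def domain_gap(real: torch.Tensor, synthetic: torch.Tensor) -> float:
    """Simple proxy for the domain gap: distance between feature means
    (an FID-style metric would also compare covariances)."""
    return torch.norm(mean_feature(real) - mean_feature(synthetic)).item()
```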
Methodology:
Fine-tuning the Multi-Distribution Matching Diffusion Model with LoRA (see the sketch after this list).
Forming the Current Task Training Dataset.
Training with Multi-Domain Adaptation.
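As a rough illustration of the first step, here is a minimal sketch that LoRA-fine-tunes a text-to-image diffusion model on the current task's real images with the standard denoising loss, using the diffusers and peft libraries. The base checkpoint, LoRA rank, target modules, and the `lora_step` helper are illustrative assumptions; the paper's multi-distribution matching objective is not reproduced here.

```python
import torch
import torch.nn.functional as F
from diffusers import StableDiffusionPipeline
from peft import LoraConfig

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
unet, vae, text_encoder, tokenizer = pipe.unet, pipe.vae, pipe.text_encoder, pipe.tokenizer
scheduler = pipe.scheduler

# Freeze the pretrained weights; only the injected LoRA matrices will train.
for module in (unet, vae, text_encoder):
    module.requires_grad_(False)
unet.add_adapter(LoraConfig(r=8, lora_alpha=8,
                            target_modules=["to_q", "to_k", "to_v", "to_out.0"]))

optimizer = torch.optim.AdamW([p for p in unet.parameters() if p.requires_grad], lr=1e-4)

def lora_step(images, prompts):
    """One denoising-loss step on a batch of current-task images and class prompts."""
    latents = vae.encode(images).latent_dist.sample() * vae.config.scaling_factor
    noise = torch.randn_like(latents)
    t = torch.randint(0, scheduler.config.num_train_timesteps, (latents.shape[0],))
    noisy = scheduler.add_noise(latents, noise, t)
    tokens = tokenizer(prompts, padding="max_length", truncation=True, return_tensors="pt")
    text_states = text_encoder(tokens.input_ids)[0]
    pred = unet(noisy, t, text_states).sample  # predict the added noise
    loss = F.mse_loss(pred, noise)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```

Once adapted, such a model can synthesize images of previously seen classes, standing in for the stored exemplars that exemplar-free CIL forbids.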
Experiment:
Evaluation on the CIFAR100 and ImageNet100 datasets under class-incremental settings.
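For context, a typical class-incremental protocol on CIFAR100 partitions the 100 classes into disjoint tasks that arrive sequentially. The 10-task split, class ordering, and helper names below are assumptions for illustration, not necessarily the paper's exact incremental settings.

```python
# Hypothetical sketch of a class-incremental split of CIFAR-100 into tasks.
import numpy as np
from torchvision.datasets import CIFAR100

def make_class_incremental_splits(num_classes=100, num_tasks=10, seed=0):
    """Partition class IDs into disjoint, ordered tasks."""
    order = np.random.default_rng(seed).permutation(num_classes)
    return np.array_split(order, num_tasks)

def task_sample_indices(dataset, task_classes):
    """Indices of samples whose label falls in the current task's classes."""
    targets = np.asarray(dataset.targets)
    return np.where(np.isin(targets, task_classes))[0]

train_set = CIFAR100(root="./data", train=True, download=True)
for task_id, task_classes in enumerate(make_class_incremental_splits()):
    idx = task_sample_indices(train_set, task_classes)
    # Exemplar-free CIL: when training task `task_id`, only these real samples
    # (plus synthetic replay for earlier classes) are available.
    print(f"task {task_id}: {len(task_classes)} classes, {len(idx)} images")
```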
Results and Analysis:
Our method outperforms SOTA exemplar-free CIL methods with significant improvements.
Ablation Studies:
Multi-Distribution Matching, Multi-Domain Adaptation, and Selective Synthetic Image Augmentation are crucial components.
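Of these, selective synthetic image augmentation can be pictured as a confidence-based filter: keep only generated images that a previously trained classifier assigns to their intended class with high probability. The threshold and interface below are assumptions, not the paper's exact rule.

```python
# Hypothetical sketch of selective synthetic image augmentation via
# confidence filtering with the previous-task classifier.
import torch

@torch.no_grad()
def select_synthetic(images, intended_labels, old_model, threshold=0.8):
    """Keep synthetic images that the old model classifies as their intended
    class with probability >= threshold; discard the rest as off-distribution."""
    old_model.eval()
    probs = torch.softmax(old_model(images), dim=1)
    confidence = probs[torch.arange(len(images)), intended_labels]
    keep = confidence >= threshold
    return images[keep], intended_labels[keep]
```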
Conclusion:
The proposed method excels at mitigating catastrophic forgetting and balancing stability and plasticity.
Statistics
Recent deep learning models have achieved excellent performance across a variety of tasks, but catastrophic forgetting limits their ability to learn continually.
Exemplar-free CIL is an even harder challenge because access to previous task data is prohibited.
Exemplar-free CIL fails to overcome catastrophic forgetting because of the significant domain gap between synthetic and real data.
Quotes
"Our method excels previous exemplar-free CIL methods with non-marginal improvements and achieves state-of-the-art performance."