Class-Incremental Learning (CIL) faces challenges with imbalanced data distributions, which lead to skewed gradient updates and catastrophic forgetting. The proposed method reweights gradients to balance optimization and mitigate forgetting. A distribution-aware knowledge distillation loss further aligns the output logits with the distribution of the lost (no longer accessible) training data. Experimental results show consistent improvements across various datasets and evaluation protocols.
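As a rough illustration of the two ideas above, the sketch below rescales the classifier's per-class gradients by inverse class frequency and applies a temperature-scaled distillation loss whose teacher targets are reweighted by an estimated prior over the old classes. The helper names, the inverse-frequency weighting, and the prior-reweighting step are assumptions made for illustration; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def reweight_fc_gradients(fc_layer, class_counts):
    """Rescale per-class gradient rows of the final linear classifier.

    Assumption: inverse-frequency weights stand in for the paper's balanced
    gradient reweighting; rarer (old) classes get proportionally larger updates.
    Call this after loss.backward() and before optimizer.step().
    """
    counts = torch.as_tensor(class_counts, dtype=torch.float32,
                             device=fc_layer.weight.device)
    weights = counts.mean() / counts.clamp(min=1)        # shape: [num_classes]
    if fc_layer.weight.grad is not None:
        fc_layer.weight.grad.mul_(weights.unsqueeze(1))  # one weight per class row
    if fc_layer.bias is not None and fc_layer.bias.grad is not None:
        fc_layer.bias.grad.mul_(weights)

def distribution_aware_kd(student_logits, teacher_logits, old_class_prior, T=2.0):
    """Knowledge distillation whose soft targets are reweighted by a prior over
    the old (no longer available) classes, then renormalized.

    Assumption: this prior-reweighting is a stand-in for the paper's
    distribution-aware alignment with the lost training data.
    """
    soft_teacher = F.softmax(teacher_logits / T, dim=1) * old_class_prior
    soft_teacher = soft_teacher / soft_teacher.sum(dim=1, keepdim=True)
    log_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (T * T)
```

In a training step, one would combine the classification loss with distribution_aware_kd, call backward(), and then apply reweight_fc_gradients before the optimizer step.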
Key insights distilled from the source content at arxiv.org, by Jiangpeng He..., 02-29-2024: https://arxiv.org/pdf/2402.18528.pdf