Class-Incremental Learning (CIL) faces challenges with imbalanced data distributions, which lead to skewed gradient updates and catastrophic forgetting. The proposed method reweights gradients to balance optimization and mitigate forgetting. A distribution-aware knowledge distillation loss aligns output logits with the distribution of the lost training data. Experimental results show consistent improvements across various datasets and evaluation protocols.
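The sketch below illustrates the general idea only: a cross-entropy term whose per-class weights counteract class imbalance, plus a distillation term that keeps the current model's old-class logits close to the previous model's. The weighting scheme (inverse class frequency), the hyperparameters, and all names (`class_counts`, `old_logits`, `kd_temperature`) are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch (PyTorch): class-frequency-based loss reweighting plus a
# knowledge-distillation term over old classes. Illustrative only; the paper's
# distribution-aware KD loss and gradient reweighting may differ in detail.
import torch
import torch.nn.functional as F


def reweighted_cil_loss(logits, labels, class_counts, old_logits=None,
                        num_old_classes=0, kd_temperature=2.0, kd_weight=1.0):
    """Cross-entropy with per-class weights inversely proportional to class
    frequency, plus a KD term aligning current old-class logits with those of
    the previous-task model (a stand-in for distribution-aware distillation)."""
    # Rarer classes get larger weights, balancing their gradient contribution.
    weights = class_counts.sum() / (len(class_counts) * class_counts.clamp(min=1.0))
    ce = F.cross_entropy(logits, labels, weight=weights)

    if old_logits is None or num_old_classes == 0:
        return ce

    # Distill only over old classes whose training data is no longer available.
    t = kd_temperature
    p_old = F.log_softmax(logits[:, :num_old_classes] / t, dim=1)
    q_old = F.softmax(old_logits[:, :num_old_classes] / t, dim=1)
    kd = F.kl_div(p_old, q_old, reduction="batchmean") * (t * t)
    return ce + kd_weight * kd
```

In use, `old_logits` would come from a frozen copy of the model trained on previous tasks, and `class_counts` from the (imbalanced) current training set.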
Key ideas extracted from source content by Jiangpeng He... at arxiv.org, 02-29-2024: https://arxiv.org/pdf/2402.18528.pdf