Class-Incremental Learning (CIL) struggles when training data are imbalanced: frequent classes dominate the gradient updates, skewing optimization and worsening catastrophic forgetting of earlier classes. The proposed method reweights gradients to rebalance optimization and mitigate forgetting, and adds a distribution-aware knowledge distillation loss that aligns the output logits with the distribution of the lost (no-longer-accessible) training data. Experiments show consistent improvements across multiple datasets and evaluation protocols.
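To make the two ingredients concrete, below is a minimal PyTorch sketch. The inverse-frequency weighting, the function names, and the temperature-scaled distillation loss are illustrative assumptions for exposition, not the paper's exact formulation; see the linked PDF for the actual gradient-reweighting rule and distribution-aware loss.

```python
# Hypothetical sketch: class-frequency gradient reweighting plus a
# standard KD loss, as stand-ins for the paper's components.
import torch
import torch.nn as nn
import torch.nn.functional as F

def class_weights(counts: torch.Tensor) -> torch.Tensor:
    """Inverse-frequency class weights: w_i = N / (K * n_i)."""
    return counts.sum() / (counts.float().clamp(min=1) * len(counts))

def reweight_classifier_grads(fc: nn.Linear, weights: torch.Tensor) -> None:
    """Scale each class's gradient row in the final FC layer so that
    frequent (head) classes do not dominate the update direction."""
    if fc.weight.grad is not None:
        fc.weight.grad.mul_(weights.unsqueeze(1))
    if fc.bias is not None and fc.bias.grad is not None:
        fc.bias.grad.mul_(weights)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """Temperature-scaled KD loss (an illustrative stand-in for the
    paper's distribution-aware variant)."""
    log_p = F.log_softmax(student_logits / T, dim=1)
    q = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p, q, reduction="batchmean") * T * T

# --- toy usage ---
torch.manual_seed(0)
model = nn.Linear(16, 5)                      # stand-in for the final classifier
counts = torch.tensor([500, 200, 50, 10, 5])  # imbalanced per-class sample counts
weights = class_weights(counts)

x, y = torch.randn(32, 16), torch.randint(0, 5, (32,))
loss = F.cross_entropy(model(x), y)
loss.backward()
reweight_classifier_grads(model, weights)     # rebalance before optimizer.step()
```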
Key insights drawn from the original content at arxiv.org, by Jiangpeng He..., 02-29-2024
https://arxiv.org/pdf/2402.18528.pdf