Class-Incremental Learning (CIL) suffers from imbalanced data distributions, which skew gradient updates toward well-represented classes and aggravate catastrophic forgetting. The proposed method reweights gradients to balance optimization across classes and mitigate forgetting, while a distribution-aware knowledge distillation loss aligns the output logits with the distribution of the lost (no longer accessible) old-task training data. Experimental results show consistent improvements across various datasets and evaluation protocols.
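To make the two ideas concrete, here is a minimal PyTorch sketch, not the authors' exact method: (1) rescaling the per-class gradient rows of the final classifier by inverse class frequency to balance optimization, and (2) a distillation term whose per-class weights follow the old-task data distribution. All function names, shapes, and hyperparameters here are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def reweight_fc_gradients(fc: torch.nn.Linear, class_counts: torch.Tensor):
    """Rescale each class's gradient row in the classifier by inverse frequency.

    Call after loss.backward() and before optimizer.step().
    Assumption: fc.weight has one row per class; class_counts holds the
    number of training samples observed per class.
    """
    if fc.weight.grad is None:
        return
    inv_freq = class_counts.sum() / (class_counts.float() + 1e-8)
    scale = inv_freq / inv_freq.mean()       # normalize so the mean scale is 1
    fc.weight.grad *= scale.unsqueeze(1)     # per-row (per-class) rescaling
    if fc.bias is not None and fc.bias.grad is not None:
        fc.bias.grad *= scale

def distribution_aware_kd(student_logits, teacher_logits,
                          old_class_dist, T: float = 2.0):
    """Distillation loss over the old classes, with each class's contribution
    weighted by old_class_dist (a probability vector over old classes), so
    classes with more lost training data constrain the student more strongly.
    """
    n_old = old_class_dist.numel()
    p_teacher = F.softmax(teacher_logits[:, :n_old] / T, dim=1)
    log_p_student = F.log_softmax(student_logits[:, :n_old] / T, dim=1)
    per_class = -(p_teacher * log_p_student)            # shape: (batch, n_old)
    return (per_class * old_class_dist).sum(dim=1).mean() * T * T
```

In a training loop, one would compute the task loss plus `distribution_aware_kd`, call `backward()`, then apply `reweight_fc_gradients` before the optimizer step. This is a sketch of the general techniques named in the summary; the paper's exact reweighting and distillation formulations may differ.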
Key insights extracted from arxiv.org, by Jiangpeng He..., 02-29-2024.
https://arxiv.org/pdf/2402.18528.pdf