Core Concepts
This research paper introduces KOPPA, a novel approach that strengthens prompt-based continual learning by mitigating semantic drift through key-query orthogonal projection and by sharpening classification-head distinction with a prototype-based One-Versus-All (OVA) component.
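The key-query orthogonal projection idea can be illustrated with a minimal sketch: an update to a prompt key is projected onto the subspace orthogonal to the span of queries from earlier tasks, so new learning does not interfere with old key-query matches. The function name, shapes, and the use of a QR-derived basis here are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def project_orthogonal(update, old_queries):
    """Project a prompt-key update onto the subspace orthogonal to the
    span of queries from previous tasks (rows of old_queries).
    Hypothetical sketch of key-query orthogonal projection."""
    # Orthonormal basis for the span of the old query vectors.
    basis, _ = np.linalg.qr(old_queries.T)          # shape (d, k)
    # Remove the component of the update lying in that span.
    return update - basis @ (basis.T @ update)

# Toy example: the projected update has no component along any stored query.
rng = np.random.default_rng(0)
old_q = rng.standard_normal((3, 8))   # 3 queries from past tasks, dim 8
u = rng.standard_normal(8)            # candidate key update
u_perp = project_orthogonal(u, old_q)
print(np.max(np.abs(old_q @ u_perp)))  # ~0: no interference with old matches
```

Because the projected update is orthogonal to every stored query, the dot products that drive key-query matching for earlier tasks are left unchanged, which is one way to limit semantic drift.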
Stats
KOPPA surpasses other prompt-tuning methods by more than 20% on S-Imagenet-R-20.
Rehearsal-based methods lag significantly behind KOPPA, with the largest gap reaching up to 38.84%.
KOPPA demonstrates a reduced forgetting rate, nearly half that of CODA.
KOPPA consistently exhibits smaller feature shifts than CODA, as measured by the Wasserstein distance between feature distributions.
KOPPA outperforms CODA+OVA, further affirming its efficacy in reducing feature shift.
Without the OVA head, training with conventional cross-entropy (CE) loss alone yields roughly 10% lower accuracy.
Using the OVA loss exclusively for learning leads to notable forgetting and a substantial drop in accuracy.
The OVA head in KOPPA helps the model select the appropriate classification head for each input.
An increase in the number of prototypes generally leads to improved performance of the OVA head.
The integration of the OVA head is more beneficial in scenarios where tasks involve limited data.
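The feature-shift comparison above relies on the Wasserstein distance between feature distributions before and after learning a new task. A minimal way to compute this for 1-D empirical samples of equal size is the mean absolute difference of the sorted values; the drift magnitudes below are made-up illustrative numbers, not results from the paper.

```python
import numpy as np

def wasserstein_1d(a, b):
    """W1 distance between two equal-size 1-D empirical samples:
    the mean absolute difference of their sorted values."""
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

# Hypothetical illustration: features of one class before vs. after
# training on a later task; a larger distance means a larger shift.
rng = np.random.default_rng(1)
before = rng.normal(0.0, 1.0, 1000)
after_small = rng.normal(0.1, 1.0, 1000)   # small drift
after_large = rng.normal(1.0, 1.0, 1000)   # large drift
print(wasserstein_1d(before, after_small) < wasserstein_1d(before, after_large))  # True
```

A smaller distance between pre- and post-training features indicates that the representation of old classes moved less, which is the sense in which KOPPA's feature shifts are "slighter" than CODA's.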
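The head-selection role of the prototype-based OVA component can be sketched as follows: each class keeps one or more prototypes, each class's OVA score reflects how strongly it claims a feature, and the task head containing the most confident class is selected. The distance-to-prototype scoring, the `scale` parameter, and the function names are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def select_head(feature, prototypes_per_task, scale=1.0):
    """Pick the task head whose classes claim the feature most strongly.
    prototypes_per_task: list of (n_classes_t, d) arrays, one per task.
    Each class's OVA score here is the sigmoid of its negative scaled
    distance to the class prototype (an illustrative choice)."""
    best_task, best_score = -1, -np.inf
    for t, protos in enumerate(prototypes_per_task):
        dists = np.linalg.norm(protos - feature, axis=1)
        score = sigmoid(-scale * dists).max()   # most confident class in task t
        if score > best_score:
            best_task, best_score = t, score
    return best_task

# Toy check: a feature near task 1's prototypes routes to head 1.
protos = [np.array([[0., 0.], [1., 1.]]),
          np.array([[5., 5.], [6., 6.]])]
print(select_head(np.array([5.9, 6.1]), protos))  # 1
```

Adding more prototypes per class gives each OVA score a finer-grained notion of class membership, which is consistent with the observation that more prototypes generally improve the OVA head's performance.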