Core Concept
Balancing plasticity and stability in exemplar-free incremental learning through a Dual-Learner framework with Cumulative Parameter Averaging (DLCPA).
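As a rough illustration of the Cumulative Parameter Averaging step, the sketch below maintains a stability learner as a running mean of per-task (plasticity) learner weights, so only one averaged model is stored rather than every task's model. It assumes PyTorch; the function name `cumulative_average_`, the incremental-mean form, and the usage helpers are illustrative assumptions, not the paper's exact procedure.

```python
import torch

def cumulative_average_(stable: torch.nn.Module,
                        task_model: torch.nn.Module,
                        task_idx: int) -> None:
    # Running mean of task-model weights after task t (1-indexed):
    #   theta_stable <- ((t - 1) * theta_stable + theta_task) / t
    # Only the averaged model is kept, so storage does not grow with t.
    with torch.no_grad():
        for p_s, p_t in zip(stable.parameters(), task_model.parameters()):
            p_s.mul_(task_idx - 1).add_(p_t).div_(task_idx)

# Hypothetical usage over a task stream (train_on_task is assumed):
#   stable = copy.deepcopy(initial_backbone)   # stability learner
#   for t, task in enumerate(tasks, start=1):
#       fast = train_on_task(task)             # plasticity learner
#       cumulative_average_(stable, fast, t)
```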
Statistics
Although model storage in single-task learning (STL) grows linearly with the number of tasks, averaging the STL feature extractors' parameters can preserve knowledge across tasks.
DLCPA outperforms state-of-the-art exemplar-free baselines on CIFAR-100 and Tiny-ImageNet.
Quotes
"Averaging all STL feature extractors showcases potential in retaining knowledge across tasks."
"DLCPA achieves state-of-the-art performance on both exemplar-free IL datasets."