Bibliographic Information: Zhao, L., Zhang, X., Yan, K., Ding, S., & Huang, W. (2024). SAFE: Slow and Fast Parameter-Efficient Tuning for Continual Learning with Pre-Trained Models. Advances in Neural Information Processing Systems, 37 (NeurIPS 2024).
Research Objective: This paper introduces SAFE, a novel framework designed to address the limitations of existing continual learning methods that struggle to effectively transfer knowledge from pre-trained models (PTMs) and suffer from catastrophic forgetting.
Methodology: SAFE employs a dual-learner system comprising a slow learner (SL) and a fast learner (FL), both based on parameter-efficient tuning (PET). In the first session, the SL is trained to inherit general knowledge from the PTM via a knowledge transfer loss that maximizes the correlation between PTM and SL features while minimizing feature redundancy. Afterwards, the SL's PET parameters are frozen and only its classification weights are updated. The FL, whose PET parameters remain trainable, incorporates new concepts in subsequent sessions and is guided by the SL through a feature alignment loss and a cross-classification loss to mitigate forgetting. During inference, an entropy-based aggregation strategy dynamically combines the predictions of both learners, exploiting their complementary strengths.
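To make the two key mechanisms concrete, here is a minimal PyTorch sketch of (a) a correlation-based knowledge transfer loss between frozen PTM features and SL features, and (b) entropy-weighted aggregation of the two learners' predictions at inference. The Barlow-Twins-style formulation, the off-diagonal weight, and the inverse-entropy softmax weighting are illustrative assumptions rather than the authors' exact implementation; function names are hypothetical.

```python
import torch
import torch.nn.functional as F

def knowledge_transfer_loss(f_ptm, f_sl, lambda_offdiag=5e-3, eps=1e-5):
    """Correlation-based transfer loss between frozen PTM features (f_ptm)
    and slow-learner features (f_sl), both of shape (batch, dim).
    Pulls the diagonal of the cross-correlation matrix toward 1 (high
    correlation) and the off-diagonal toward 0 (low redundancy)."""
    # Standardize each feature dimension across the batch.
    f_ptm = (f_ptm - f_ptm.mean(0)) / (f_ptm.std(0) + eps)
    f_sl = (f_sl - f_sl.mean(0)) / (f_sl.std(0) + eps)
    n, _ = f_ptm.shape
    c = f_ptm.T @ f_sl / n                          # (dim, dim) cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()  # encourage correlation
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # discourage redundancy
    return on_diag + lambda_offdiag * off_diag

def entropy_weighted_aggregation(logits_sl, logits_fl):
    """Combine slow- and fast-learner predictions at inference:
    the more confident (lower-entropy) learner receives the larger weight."""
    probs = [F.softmax(l, dim=-1) for l in (logits_sl, logits_fl)]
    entropies = torch.stack(
        [-(p * p.clamp_min(1e-8).log()).sum(-1) for p in probs], dim=0
    )                                               # (2, batch)
    weights = F.softmax(-entropies, dim=0)          # lower entropy -> higher weight
    return weights[0].unsqueeze(-1) * probs[0] + weights[1].unsqueeze(-1) * probs[1]
```

The design intuition is that the SL stays close to the generalizable PTM representation (stability) while the FL keeps adapting to new sessions (plasticity); weighting each learner by its prediction confidence lets whichever is more reliable for a given sample dominate the final output.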
Key Findings: Extensive experiments on seven benchmark datasets, including CIFAR100, ImageNet-R, ImageNet-A, CUB200, Omnibenchmark, VTAB, and DomainNet, demonstrate SAFE's superior performance. SAFE achieves state-of-the-art results, notably surpassing the second-best method on ImageNet-A by 4.4%.
Main Conclusions: SAFE effectively addresses the stability-plasticity dilemma in continual learning by combining the strengths of slow and fast learners. The framework's ability to transfer knowledge from PTMs and adapt to new information without catastrophic forgetting makes it a significant contribution to the field.
Significance: SAFE offers a promising solution for developing more robust and adaptable continual learning systems, particularly in image recognition tasks. Its effectiveness in leveraging pre-trained models and mitigating forgetting opens up new possibilities for real-world applications where continuous learning is essential.
Limitations and Future Research: While SAFE demonstrates impressive results, it relies on a strong feature extractor inherited from the PTM, potentially limiting its applicability when starting from scratch or with small initial tasks. Future research could explore methods to enhance the framework's flexibility in such scenarios. Additionally, investigating alternative aggregation strategies and exploring the periodic updating of the slow learner could further improve SAFE's performance and adaptability.
Key insights extracted from: Linglan Zhao et al., arxiv.org, 11-05-2024. Source PDF: https://arxiv.org/pdf/2411.02175.pdf