Leveraging intra-layer representations and second-order feature statistics from pre-trained models can enhance performance and robustness in continual learning settings, without requiring access to past data.
Continual learning with pre-trained models has emerged as a promising approach to overcoming catastrophic forgetting, by leveraging the strong generalization capabilities these models provide.
This work proposes ICL-TSVD, a method that bridges the gap between empirical performance and theoretical guarantees in continual learning with pre-trained models. ICL-TSVD integrates the strengths of RanPAC into the Ideal Continual Learner (ICL) framework and addresses the ill-conditioning of the randomly lifted features through continual SVD truncation, achieving both provable stability and strong empirical performance.
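To make the mechanism concrete, below is a minimal NumPy sketch of the general idea: features are lifted by a frozen random projection (as in RanPAC), and a rank-k truncated SVD summary of the lifted feature stream is updated task by task, so a least-squares classifier can be fit without storing past data. All names here (make_lift, TSVDLearner, the rank k) are illustrative assumptions, not the authors' reference implementation.

```python
# Generic incremental truncated-SVD sketch for streaming least squares over
# randomly lifted features; an illustration of the technique, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

def make_lift(d_in, d_lift):
    """Frozen random ReLU projection, in the spirit of random-feature lifting."""
    W = rng.standard_normal((d_in, d_lift)) / np.sqrt(d_in)
    return lambda X: np.maximum(X @ W, 0.0)

class TSVDLearner:
    """Maintains a rank-k truncated SVD of the stream of lifted features,
    plus compressed targets, so past data never needs to be revisited."""
    def __init__(self, d_lift, k):
        self.k = k
        self.S = np.zeros(0)              # retained singular values
        self.Vt = np.zeros((0, d_lift))   # retained right singular vectors
        self.B = None                     # compressed targets, U^T Y

    def update(self, Phi, Y):
        if self.B is None:
            self.B = np.zeros((0, Y.shape[1]))
        # Stack the compressed summary of the past on top of the new block.
        M = np.vstack([self.S[:, None] * self.Vt, Phi])
        T = np.vstack([self.B, Y])
        U, S, Vt = np.linalg.svd(M, full_matrices=False)
        k = min(self.k, len(S))           # truncate: keep only top-k directions
        self.S, self.Vt = S[:k], Vt[:k]
        self.B = U[:, :k].T @ T

    def weights(self):
        # Least-squares solution restricted to the retained subspace;
        # truncation discards the ill-conditioned small singular values.
        return self.Vt.T @ (self.B / self.S[:, None])

# Usage on synthetic "tasks": update sequentially, then read out the classifier.
lift = make_lift(d_in=64, d_lift=512)
learner = TSVDLearner(d_lift=512, k=128)
for _ in range(5):
    X = rng.standard_normal((100, 64))
    Y = np.eye(10)[rng.integers(0, 10, 100)]  # one-hot labels
    learner.update(lift(X), Y)
W = learner.weights()                          # shape (512, 10)
```

The truncation step is what controls stability: small singular values of the lifted feature matrix are exactly where ill-conditioning lives, and dropping them bounds the sensitivity of the least-squares solution as tasks accumulate.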