Continual Learning: Evaluating Hyperparameters for CL Algorithms
Core Concept
The evaluation protocol for continual learning algorithms should consist of separate Hyperparameter Tuning and Evaluation phases in order to assess their CL capability accurately.
Summary
Various CL algorithms aim to balance stability and plasticity during the learning process.
The current evaluation protocol tunes hyperparameters directly on the benchmark datasets used for evaluation, which leads to overfitting to those benchmarks and is impractical in realistic settings.
The proposed protocol consists of two phases, Hyperparameter Tuning and Evaluation, which use different datasets under the same CL scenario.
Experimental results show that under the proposed protocol, some state-of-the-art algorithms underperform older ones.
Additional analysis on model size and training time reveals efficiency issues in certain algorithms despite better CL capacity.
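The two-phase protocol in the summary can be sketched as follows. This is a minimal illustration, not the paper's implementation: `run_cl`, the toy algorithm, the hyperparameter grid, and the dataset names are all hypothetical placeholders.

```python
# Sketch of the proposed two-phase protocol: hyperparameters are selected on
# one dataset (Tuning phase), then the algorithm is evaluated with those
# fixed hyperparameters on a *different* dataset under the same CL scenario
# (Evaluation phase). All names below are illustrative assumptions.

def run_cl(algorithm, dataset, hparams):
    """Run one continual-learning experiment and return average accuracy.
    Stub: delegates to a caller-supplied callable for illustration."""
    return algorithm(dataset, hparams)

def tuning_phase(algorithm, tuning_dataset, grid):
    """Phase 1: select hyperparameters on a dataset reserved for tuning."""
    return max(grid, key=lambda hp: run_cl(algorithm, tuning_dataset, hp))

def evaluation_phase(algorithm, eval_dataset, best_hp, seeds=3):
    """Phase 2: evaluate with the tuned, now-fixed hyperparameters on a
    different dataset, averaging accuracy over several seeds."""
    scores = [run_cl(algorithm, eval_dataset, best_hp) for _ in range(seeds)]
    return sum(scores) / len(scores)

# Usage with a toy "algorithm" whose accuracy depends only on the learning rate:
toy_algo = lambda dataset, hp: 0.9 if hp["lr"] == 0.01 else 0.5
grid = [{"lr": 0.1}, {"lr": 0.01}]
best_hp = tuning_phase(toy_algo, "tuning-dataset", grid)
score = evaluation_phase(toy_algo, "evaluation-dataset", best_hp)
```

Because the Evaluation-phase dataset is never seen during tuning, the reported score reflects how well the algorithm's hyperparameters transfer, rather than how well they were fit to a single benchmark.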
Hyperparameters in Continual Learning
Statistics
"In recent years, extensive research has been conducted on continual learning (CL) to effectively adapt to successive novel tasks while overcoming catastrophic forgetting for previous tasks [23]."
"Various CL algorithms tailored for successful CL in classification offer novel approaches to balance stability and plasticity during the CL process."
"Despite differing approaches in these three categories, they inevitably require introducing additional hyperparameters for their algorithm."
Quotes
"Returning to the fundamental principles of model evaluation in machine learning, we propose an evaluation protocol that involves Hyperparameter Tuning and Evaluation phases."
"This highlights the necessity of adopting the proposed protocol for a more comprehensive evaluation of CL algorithms."