Core Concepts
New information in continually learned representations is shaped by two phenomena: feature forgetting and knowledge accumulation.
Abstract
A paper addressing the difficulty of continual learning in machine learning models and the problem of feature forgetting
Experimentally reveals how knowledge accumulation and feature forgetting affect representation quality
Compares feature forgetting and knowledge accumulation across a range of continual learning methods
Stats
"Continual learning research has shown that neural networks suffer from catastrophic forgetting “at the output level”"
"new information is learned at the expense of forgetting earlier acquired knowledge"
"Models trained on all tasks concurrently (jointly) rather than sequentially can also lead to improved representations"
Quotes
"[...] in many commonly studied cases of catastrophic forgetting, the representations under naive fine-tuning approaches, undergo minimal forgetting, without losing critical task information" - Davari et al. (2022)
"there seems to be no catastrophic forgetting in terms of representations" - Zhang et al. (2022)
"Preventing feature forgetting is not only important for the performance on tasks that a model was trained on, but also to learn strong representations in general"