This paper explores the connection between two recently identified phenomena in deep learning: plasticity loss and neural collapse. The authors analyze how the two correlate across different scenarios, revealing a significant association during the initial training phase on the first task. They also introduce a regularization approach that mitigates neural collapse and demonstrate its effectiveness in alleviating plasticity loss in this specific setting.
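For context on how these quantities are commonly measured: neural collapse is often tracked through a within-class variability metric (NC1) computed on penultimate-layer features, while plasticity loss is typically measured as a drop in the model's ability to fit new tasks. The sketch below shows a simplified NC1-style ratio of within-class to between-class scatter; the function name and the exact formulation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def nc1_metric(features: np.ndarray, labels: np.ndarray) -> float:
    """Simplified within-class variability metric (an NC1-style proxy).

    features: (n_samples, d) penultimate-layer activations
    labels:   (n_samples,) integer class labels
    Returns the ratio of within-class to between-class scatter;
    values near zero indicate stronger neural collapse.
    """
    global_mean = features.mean(axis=0)
    within, between = 0.0, 0.0

    for c in np.unique(labels):
        class_feats = features[labels == c]
        class_mean = class_feats.mean(axis=0)
        # Within-class scatter: spread of samples around their class mean.
        within += ((class_feats - class_mean) ** 2).sum()
        # Between-class scatter: spread of class means around the global mean.
        between += len(class_feats) * ((class_mean - global_mean) ** 2).sum()

    return float(within / (between + 1e-12))
```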
The key findings are:
- In a continual learning scenario, the onset of plasticity loss prevents the model from reaching neural collapse, as indicated by a negative correlation between the two metrics.
- When the model is able to overfit the first task, a strong positive correlation between neural collapse and plasticity loss is observed, though this correlation diminishes as training on the first task progresses.
- The authors were able to use a neural-collapse regularizer to influence plasticity loss, suggesting a potential causal relationship between the two phenomena (a schematic example of such a regularizer follows this list).
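As an illustration of how a neural-collapse regularizer could be attached to the training objective, here is a minimal PyTorch-style sketch that rewards within-class feature spread, thereby discouraging collapse. The function name, the nc_weight parameter, and the specific penalty are hypothetical; the paper's actual regularizer may differ.

```python
import torch
import torch.nn.functional as F

def loss_with_nc_regularizer(logits, features, labels, nc_weight=0.1):
    """Cross-entropy plus a hypothetical anti-collapse term.

    logits:   (batch, n_classes) classifier outputs
    features: (batch, d) penultimate-layer activations
    labels:   (batch,) integer class labels
    """
    ce = F.cross_entropy(logits, labels)

    # Accumulate the within-class variance of the features.
    within = features.new_zeros(())
    for c in labels.unique():
        class_feats = features[labels == c]
        if class_feats.shape[0] > 1:
            within = within + class_feats.var(dim=0, unbiased=False).sum()

    # Subtracting the within-class variance rewards feature spread,
    # i.e. it pushes the representation away from neural collapse.
    return ce - nc_weight * within
```

Flipping the sign of the penalty would instead encourage collapse, which is how such a term can be used to probe the relationship in both directions.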
The paper highlights the complex interplay between neural collapse and plasticity loss, which is shaped by factors such as network size, optimization schedule, and task similarity. The authors emphasize that future studies should explore these variables thoroughly to better understand the relationship between the two phenomena.