Core Concepts
A novel brain-inspired framework for continual learning that distills and re-consolidates robust features to mitigate catastrophic forgetting.
Abstract
The paper introduces a novel brain-inspired framework for continual learning (CL) that comprises two key concepts: feature distillation and re-consolidation.
The feature distillation process distills CL-robust features and rehearses them while learning the next task, mirroring the mammalian brain's consolidation of memories through rehearsal of distilled versions of waking experiences.
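The abstract does not spell out the distillation mechanics, so the following is a minimal PyTorch sketch of one plausible reading, in the spirit of robust-feature matching: a distilled input is synthesized so that its features under the CL model match those of the original sample, and the distilled inputs are then rehearsed alongside the next task's data. All names (`feature_extractor`, `distill_example`, `rehearsal_step`) and hyperparameters are illustrative assumptions, not the paper's API.

```python
# Sketch of feature distillation via robust-feature matching; an assumption
# about the method, not the paper's exact procedure. `feature_extractor` is
# taken to be the CL model up to its penultimate layer.
import torch
import torch.nn.functional as F

def distill_example(feature_extractor, x, steps=200, lr=0.1):
    """Synthesize an input whose features match those of x under the model."""
    with torch.no_grad():
        target = feature_extractor(x)               # CL-robust feature target
    x_hat = torch.rand_like(x, requires_grad=True)  # start from a random input
    opt = torch.optim.Adam([x_hat], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.mse_loss(feature_extractor(x_hat), target).backward()
        opt.step()
    return x_hat.detach()

def rehearsal_step(model, new_batch, distilled_batch, lam=1.0):
    """One training step on the next task, rehearsing distilled samples."""
    x, y = new_batch
    x_d, y_d = distilled_batch  # distilled inputs paired with their old labels
    return F.cross_entropy(model(x), y) + lam * F.cross_entropy(model(x_d), y_d)
```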
Feature re-consolidation re-distills the CL-robust features after the model learns the current task, incorporating updated feature-importance information for previous tasks. This recalibrates the CL-robust features associated with previous tasks so they track their evolving dynamics over the course of learning.
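Continuing the sketch above, re-consolidation could re-run the same distillation under the updated model, warm-starting from the previously distilled samples. This version assumes a small buffer of raw exemplars remains accessible, which is an assumption of the sketch rather than a detail confirmed by the abstract.

```python
# Sketch of feature re-consolidation: after the current task is learned, the
# stored distilled samples are refreshed under the updated model so the
# rehearsal buffer tracks the evolving CL-robust features.
def reconsolidate(feature_extractor, raw_buffer, distilled_buffer,
                  steps=100, lr=0.05):
    refreshed = []
    for x, x_hat in zip(raw_buffer, distilled_buffer):
        with torch.no_grad():
            target = feature_extractor(x)            # updated feature importance
        x_new = x_hat.clone().requires_grad_(True)   # warm start from old distillate
        opt = torch.optim.Adam([x_new], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            F.mse_loss(feature_extractor(x_new), target).backward()
            opt.step()
        refreshed.append(x_new.detach())
    return refreshed
```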
The proposed framework, called Robust Rehearsal, sidesteps a key limitation of existing CL frameworks: their reliance on pre-trained Oracle CL models to pre-distill CL-robustified datasets for training subsequent CL models.
Extensive experiments on CIFAR10, CIFAR100, and a real-world helicopter attitude dataset demonstrate that CL models trained with Robust Rehearsal outperform their baseline counterparts. The experiments also vary memory size and the number of tasks, showing that baseline methods augmented with robust rehearsal outperform those trained without it.
Finally, the paper explores how different optimization objectives, spanning joint, continual, and adversarial learning, shape feature learning in deep neural networks. The findings indicate that the optimization objective dictates which features are learned, which in turn plays a vital role in model performance, further underscoring the importance of rehearsing CL-robust samples to alleviate catastrophic forgetting.
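To make the contrast between these objectives concrete, here is a schematic sketch of joint and adversarial training; continual training with rehearsal corresponds to `rehearsal_step` shown earlier. The PGD parameters (`eps`, `alpha`, `steps`) are common defaults rather than values from the paper, and inputs are assumed to lie in [0, 1].

```python
# Schematic of the optimization objectives compared in the paper; parameter
# values are illustrative assumptions.
def joint_objective(model, all_task_batches):
    # Joint training: pool data from every task and minimize loss jointly.
    return sum(F.cross_entropy(model(x), y) for x, y in all_task_batches)

# Continual training with rehearsal corresponds to `rehearsal_step` above.

def adversarial_objective(model, x, y, eps=8/255, alpha=2/255, steps=7):
    # Adversarial training: minimize loss on PGD-perturbed inputs in [0, 1].
    x_adv = (x + eps * torch.empty_like(x).uniform_(-1, 1)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        grad, = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)
        x_adv = x_adv + alpha * grad.sign()                  # ascent step
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)  # project
    return F.cross_entropy(model(x_adv.detach()), y)
```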
Stats
"Artificial intelligence and neuroscience have a long and intertwined history."
"Advancements in neuroscience research have significantly influenced the development of artificial intelligence systems that have the potential to retain knowledge akin to humans."
"CL models trained on robust features performed robustly under noisy and adversarial conditions, in contrast to the CL models trained on non-robust features."
"CL model trained on a pre-distilled CL-robustified dataset mitigates catastrophic forgetting, emphasizing the capacity of CL-robustified features in mitigating catastrophic forgetting."
Quotes
"Building upon foundational insights from neuroscience and existing research in adversarial and continual learning fields, we introduce a novel framework that comprises two key concepts: feature distillation and re-consolidation."
"The framework distills continual learning (CL) robust features and rehearses them while learning the next task, aiming to replicate the mammalian brain's process of consolidating memories through rehearsing the distilled version of the waking experiences."
"The feature re-consolidation focuses on re-distilling the CL-robust features, thereby enabling the incorporation of updated feature importance information for previous tasks after the model learns the current task."