The article proposes DynaMMo, a method for efficient class incremental learning (CIL) in the medical imaging domain. CIL aims to enable models to continuously learn new classes (e.g., diseases) while retaining knowledge of previously learned classes, addressing the challenge of catastrophic forgetting.
The key aspects of DynaMMo are:
Adapter Tuning: DynaMMo employs lightweight, learnable adapter modules within a pre-trained CNN backbone to capture task-specific features for each incoming task. This allows the model to adapt to new tasks without significantly impacting performance on previous ones.
Adapter Merging: After the adapter-tuning stage, DynaMMo merges the task-specific adapters by averaging their weights into a single module. This reduces the computational overhead associated with dynamic CIL approaches, which typically require multiple forward passes (one per task-specific component) during training and inference.
Balanced Fine-tuning: DynaMMo fine-tunes a single, unified classification head on a balanced set of samples drawn from the current and previous tasks, which counteracts the bias toward recently learned classes and further improves performance.
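The three stages above can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the bottleneck adapter shape, the elementwise weight averaging, and the per-class sampling helper (`adapter_forward`, `merge_adapters`, `balanced_subset`) are assumptions standing in for the paper's actual modules.

```python
import numpy as np

# Hypothetical bottleneck adapter with a residual connection: a small
# down-projection, ReLU, and up-projection added to the backbone feature.
def adapter_forward(x, w_down, w_up):
    return x + np.maximum(x @ w_down, 0.0) @ w_up

# Merging step: task-specific adapter weights (same-shaped arrays) are
# averaged elementwise into one adapter, so inference needs only a
# single forward pass instead of one per task.
def merge_adapters(adapters):
    return np.mean(np.stack(adapters), axis=0)

# Balanced fine-tuning: draw at most `per_class` exemplars from every
# class across current and previous tasks before tuning the classifier.
def balanced_subset(labels, per_class, seed=0):
    rng = np.random.default_rng(seed)
    idx = []
    for c in np.unique(labels):
        cls_idx = np.flatnonzero(labels == c)
        take = min(per_class, len(cls_idx))
        idx.extend(rng.choice(cls_idx, size=take, replace=False))
    return np.array(idx)

# Two toy 2x2 "adapters" from two tasks merge into their mean.
merged = merge_adapters([np.ones((2, 2)), 3 * np.ones((2, 2))])
print(merged)  # every entry is 2.0

# Classes 0, 1, 2 have 3, 2, and 1 samples; capped at 2 each -> 5 indices.
labels = np.array([0, 0, 0, 1, 1, 2])
subset = balanced_subset(labels, per_class=2)
print(len(subset))  # 5
```

The averaging step is what collapses the dynamically grown components back to a fixed inference cost, which is where the reported GFLOPs savings come from.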
The authors evaluate DynaMMo on three publicly available datasets: CIFAR100, PATH16, and SKIN8. Compared to state-of-the-art CIL methods, DynaMMo achieves around a 10-fold reduction in GFLOPs while maintaining comparable or better classification performance.
Source: "Key Insights Distilled From" by Mohammad Are... on arxiv.org, 04-23-2024. https://arxiv.org/pdf/2404.14099.pdf