Core Concepts
Using synthetic images of future classes generated by pre-trained text-to-image diffusion models can significantly improve the performance of exemplar-free class incremental learning methods relying on a frozen feature extractor.
Abstract
The paper introduces Future-Proof Class Incremental Learning (FPCIL), a novel approach that addresses the limitations of exemplar-free class incremental learning (EFCIL) methods relying on a frozen feature extractor.
Key highlights:
- EFCIL methods are highly dependent on the data used to train the feature extractor during the initial step and may struggle when that step contains only a few classes.
- FPCIL leverages pre-trained text-to-image diffusion models to generate synthetic images of future classes and uses them jointly with the current dataset to train the feature extractor during the initial step.
- Experiments on CIFAR100 and ImageNet-Subset show that FPCIL can significantly improve the performance of state-of-the-art EFCIL methods, especially in the most challenging settings where the initial step contains few classes.
- Using synthetic images of future classes achieves higher performance than using real images from different classes, demonstrating the benefits of this approach.
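The core mechanism in the highlights above can be sketched as a dataset-construction step: prompt a pre-trained text-to-image diffusion model with the names of future classes, then train the initial feature extractor on the union of real and synthetic images. The sketch below is illustrative only; the prompt template, the `generate` callable, and all class names are assumptions, not the paper's actual code (a real pipeline would plug in something like a pre-trained Stable Diffusion model for `generate`).

```python
# Hedged sketch of FPCIL's initial-step data construction.
# The prompt format and the generate() stub are illustrative assumptions.

def make_prompts(future_classes, n_per_class):
    # One text prompt per synthetic image, e.g. "a photo of a lion".
    return [(c, f"a photo of a {c}")
            for c in future_classes
            for _ in range(n_per_class)]

def build_initial_training_set(real_data, future_classes, generate,
                               n_per_class=2):
    """real_data: list of (image, label) pairs for the current classes.
    generate: a text-to-image function (e.g. a diffusion model pipeline).
    Returns the joint set used to train the feature extractor."""
    synthetic = [(generate(prompt), cls)
                 for cls, prompt in make_prompts(future_classes, n_per_class)]
    # The feature extractor is trained jointly on real + synthetic images,
    # so its representation already covers future classes.
    return real_data + synthetic

if __name__ == "__main__":
    real = [("img_cat_0", "cat"), ("img_dog_0", "dog")]
    # Placeholder standing in for a pre-trained diffusion model.
    fake_generate = lambda prompt: f"synthetic<{prompt}>"
    train_set = build_initial_training_set(real, ["lion", "tiger"],
                                           fake_generate)
    print(len(train_set))  # 2 real + 2 classes * 2 synthetic = 6
```

After this initial step, incremental learning proceeds as in the underlying EFCIL method, with the feature extractor frozen; the point of the joint set is that its representation space already accommodates the classes that arrive later.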