Key Concepts
The author explores image compression as a way to increase memory buffer capacity and exemplar diversity in continual learning systems.
Summary
Image compression is investigated as a strategy to increase memory buffer capacity and exemplar diversity in continual learning. The study analyzes the domain shift that compressed exemplars introduce, proposes a new framework for selecting the compression rate, and conducts experiments on the CIFAR-100 and ImageNet datasets. Results show significant improvements in image classification accuracy in continual learning settings.
Statistics
Image compression reduces file size, allowing more exemplars to be stored within a fixed memory budget.
Memory replay-based algorithms mitigate catastrophic forgetting.
Compressed exemplars can introduce domain shift during continual learning.
The data rate (compression ratio) determines the equivalent exemplar set size for class-incremental learning (CIL) methods.
Pre-processing aligns data characteristics between the training and testing phases.
Feature-space MSE helps select the best compression configuration for the exemplars.
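The two quantitative ideas above, trading compression rate against exemplar count under a fixed memory budget, and comparing compression settings by feature-space MSE, can be sketched as follows. This is a minimal illustration only: the uniform quantizer stands in for a real codec (e.g., JPEG) and the random linear projection stands in for the learned feature extractor; `exemplar_capacity`, `quantize`, and `features` are hypothetical helper names, not the paper's actual pipeline.

```python
import numpy as np

def exemplar_capacity(budget_bytes, bytes_per_exemplar):
    """Number of exemplars that fit in a fixed memory budget."""
    return budget_bytes // bytes_per_exemplar

def quantize(img, levels):
    """Toy lossy 'compressor': uniform quantization to `levels` gray levels."""
    step = 256 // levels
    return (img // step) * step + step // 2

def feature_mse(feats_a, feats_b):
    """Mean squared error between two feature vectors."""
    return float(np.mean((feats_a - feats_b) ** 2))

rng = np.random.default_rng(0)
# Stand-in feature extractor: a fixed random projection of a 32x32 image.
proj = rng.standard_normal((32 * 32, 64)) / 32.0

def features(img):
    return img.astype(np.float64).reshape(-1) @ proj

img = rng.integers(0, 256, size=(32, 32))

# Coarser quantization -> fewer bits per pixel -> more exemplars fit,
# but the feature-space distortion (MSE) grows.
for levels in (256, 32, 8, 2):
    bits = int(np.ceil(np.log2(levels)))
    size = 32 * 32 * bits // 8  # bytes per compressed exemplar
    mse = feature_mse(features(img), features(quantize(img, levels)))
    print(levels, size, exemplar_capacity(4096, size), round(mse, 3))
```

Selecting the compression setting then amounts to picking the highest rate whose feature MSE stays below an acceptable threshold, so the buffer holds as many exemplars as possible without drifting too far from the original feature distribution.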
Quotes
"Memory replay involves retaining selected exemplars from previous classes within a defined memory budget."
"Compression allows more previously seen class data in the memory buffer, promoting a balanced training set."
"Compression as a pre-processing step mitigates potential domain shift issues during testing."