Core Concepts
Using the score-matching function for adaptive contrastive learning enhances representation diversity and performance across various CL methods.
Abstract
ScoreCL introduces a novel approach to contrastive learning (CL) by leveraging the score-matching function to measure how differently two views of an image have been augmented. By adaptively weighting pairs based on score values, it boosts performance across CL methods such as SimCLR, SimSiam, W-MSE, and VICReg. The method improves image classification on the CIFAR and ImageNet datasets by up to 3%p. Extensive experiments validate the effectiveness of ScoreCL on diverse downstream tasks and with different augmentation strategies.
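The core idea above (weighting each positive pair by how different its two augmented views are) can be sketched as a score-weighted InfoNCE loss. This is a minimal illustrative sketch in PyTorch, not the paper's exact formulation: `score_diff` stands in for whatever per-pair augmentation-difference estimate the score-matching function provides, and the `1 + normalized difference` weighting rule is an assumption for illustration.

```python
import torch
import torch.nn.functional as F

def score_weighted_info_nce(z1, z2, score_diff, temperature=0.5):
    """SimCLR-style InfoNCE loss with per-pair weights derived from
    score differences between the two augmented views (illustrative).

    z1, z2:     (N, D) embeddings of two views of the same N images
    score_diff: (N,) nonnegative values measuring how differently the
                two views were augmented (e.g. a score-matching estimate)
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature           # (N, N) similarity matrix
    labels = torch.arange(z1.size(0))            # positives on the diagonal
    per_pair = F.cross_entropy(logits, labels, reduction="none")
    # Hypothetical weighting: pairs whose views differ more get larger
    # weight, pushing the model to stay invariant to harder view changes.
    weights = 1.0 + score_diff / (score_diff.mean() + 1e-8)
    return (weights * per_pair).mean()
```

With uniform `score_diff` this reduces to the ordinary InfoNCE loss, so the weighting can be layered onto an existing SimCLR training loop without other changes.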
Stats
Recently, it has been verified that the model learns better representation with diversely augmented positive pairs because they enable the model to be more view-invariant.
We show the generality of our method, referred to as ScoreCL, by consistently improving various CL methods, SimCLR, SimSiam, W-MSE, and VICReg, up to 3%p in image classification on CIFAR and ImageNet datasets.
Leveraging the observed properties of DSM, we propose a simple but novel CL framework called “Score-Guided Contrastive Learning”, namely ScoreCL.
Through extensive experiments, we show that models trained with our method consistently outperform others, even with recent CL methods, new augmentation strategies, and a large-scale dataset.
To verify the generality of our approach to existing methods, we select four different types of methods as presented in [10]: SimCLR (Contrastive learning), SimSiam (Distillation methods), W-MSE (Information maximization methods), and VICReg (Joint embedding).
Quotes
"We hope our exploration will inspire more research in exploiting the score matching for CL."
"Our proposed methods make CL model focus on the difference between the views to cover a wide range of view diversity."
"Empirical evaluations underscore the consistent performance increase regardless of datasets, augmentation strategy or CL models."