LeOCLR: Leveraging Original Images for Contrastive Learning of Visual Representations
Core Concepts
LeOCLR introduces a new approach to contrastive instance discrimination that improves representation learning by incorporating the original, uncropped image as an additional view.
Abstract
Self-supervised contrastive learning relies heavily on data augmentations such as random cropping.
Random crops of the same image can contain semantically different content, so forcing such positive pairs together in the latent space can mislead the model.
LeOCLR ensures the semantic information shared between views is captured correctly by also leveraging the original image during training.
Experimental results show consistent improvements in representation learning across datasets compared to baseline models.
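The idea of anchoring crop views to the original image can be illustrated with a toy objective. The sketch below is a hypothetical NumPy implementation of an InfoNCE-style loss in which each crop embedding is pulled toward the embedding of the original (uncropped) image rather than toward the other crop; the function name and all details are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def normalize(v):
    # L2-normalize along the last axis.
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def original_anchored_loss(z_orig, z_crops, z_negs, temperature=0.5):
    """Toy InfoNCE-style loss (hypothetical, not the paper's exact form):
    each crop embedding treats the original image's embedding as its
    positive, and embeddings of other images as negatives."""
    z_orig, z_crops, z_negs = normalize(z_orig), normalize(z_crops), normalize(z_negs)
    pos = z_crops @ z_orig / temperature    # similarity of each crop to the original
    neg = z_crops @ z_negs.T / temperature  # similarities of each crop to negatives
    logits = np.concatenate([pos[:, None], neg], axis=1)
    # Cross-entropy with the original image at index 0 of the logits.
    return float(np.mean(np.log(np.exp(logits).sum(axis=1)) - pos))

rng = np.random.default_rng(0)
z_orig = rng.normal(size=8)                          # embedding of the original image
z_negs = rng.normal(size=(16, 8))                    # embeddings of other images
aligned = z_orig + 0.05 * rng.normal(size=(2, 8))    # crops sharing the original's semantics
mismatched = rng.normal(size=(2, 8))                 # crops unrelated to the original
loss_good = original_anchored_loss(z_orig, aligned, z_negs)
loss_bad = original_anchored_loss(z_orig, mismatched, z_negs)
```

As expected for such a loss, crops whose embeddings agree with the original image incur a lower loss than semantically unrelated ones.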
Stats
Contrastive instance discrimination can outperform supervised learning on downstream tasks such as image classification and object detection.
Random cropping followed by resizing is a common form of data augmentation used in contrastive learning.
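The random-crop-and-resize augmentation mentioned above can be sketched in a few lines. The following is a minimal NumPy stand-in for the usual RandomResizedCrop transform, assuming a 2-D grayscale image and nearest-neighbour resizing; the function name and parameters are illustrative, not from the paper.

```python
import numpy as np

def random_resized_crop(img, out_size, scale=(0.2, 1.0), rng=None):
    """Randomly crop a square region of `img` covering a random fraction
    of its area, then resize it to (out_size, out_size) with
    nearest-neighbour interpolation. Toy stand-in for the
    RandomResizedCrop augmentation used in contrastive pipelines."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape[:2]
    # Sample a crop area as a fraction of the image area.
    area = rng.uniform(*scale) * h * w
    side = max(1, min(int(np.sqrt(area)), h, w))
    top = rng.integers(0, h - side + 1)
    left = rng.integers(0, w - side + 1)
    crop = img[top:top + side, left:left + side]
    # Nearest-neighbour resize: map output pixels back onto the crop.
    rows = (np.arange(out_size) * side / out_size).astype(int)
    cols = (np.arange(out_size) * side / out_size).astype(int)
    return crop[np.ix_(rows, cols)]

img = np.arange(64 * 64).reshape(64, 64)
view1 = random_resized_crop(img, 32)  # two random views of the same image
view2 = random_resized_crop(img, 32)  # together they form a positive pair
```

Because the two crops may land on different parts of the image, they can depict different semantic content, which is precisely the failure mode LeOCLR targets.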
Quotes
"Creating positive pairs by random cropping and encouraging the model to bring these two views closer in the latent space based on the information in the shared region between the two views makes the SSL model task harder and improves representation quality."
"Our approach consistently enhances visual representation learning for contrastive instance discrimination across different datasets and transfer learning scenarios."