Feature augmentation, a technique for creating diverse training data in the feature space, significantly improves the performance and generalization of self-supervised contrastive learning models by enhancing view variance and mitigating data scarcity.
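As an illustration, here is a minimal PyTorch sketch of feature-space augmentation via Gaussian perturbation and convex interpolation of embeddings; `augment_features`, the noise scale, and the mixing coefficient are illustrative choices, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def augment_features(z: torch.Tensor, noise_std: float = 0.1,
                     mix_alpha: float = 0.2) -> torch.Tensor:
    """Create extra positive views directly in feature space.

    z: (batch, dim) embeddings from the encoder.
    Returns perturbed embeddings of the same shape.
    """
    # Additive Gaussian noise keeps the new view close to its anchor.
    noisy = z + noise_std * torch.randn_like(z)
    # Convex interpolation with a shuffled batch adds harder variation.
    lam = 1.0 - mix_alpha * torch.rand(z.size(0), 1, device=z.device)
    mixed = lam * noisy + (1.0 - lam) * noisy[torch.randperm(z.size(0))]
    return F.normalize(mixed, dim=1)

# Usage: z = encoder(images); z_aug = augment_features(z), then treat
# (z, z_aug) as an additional positive pair in the contrastive loss.
```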
The core message of this paper is that contrastive learning can be significantly improved by curating the training batches to eliminate false positive and false negative pairs that data augmentation can introduce. The authors propose a method based on the Fréchet ResNet Distance (FRD) to identify and discard "bad batches" that are likely to contain misleading samples, and a Huber loss regularization to further improve the robustness of the learned representations.
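A rough sketch of the batch-curation idea follows, using a diagonal-covariance approximation of the Fréchet distance to keep the code short (the standard Fréchet distance uses full covariance matrices); `frechet_distance_diag`, `keep_batch`, and `FRD_THRESHOLD` are hypothetical names and values, not the paper's implementation.

```python
import torch

def frechet_distance_diag(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Fréchet distance between Gaussian fits of two feature batches,
    under a diagonal-covariance simplification.

    x, y: (batch, dim) features, e.g. from a frozen ResNet (hence "FRD").
    """
    mu_x, mu_y = x.mean(0), y.mean(0)
    sd_x, sd_y = x.std(0), y.std(0)
    # d^2 = ||mu_x - mu_y||^2 + sum((sd_x - sd_y)^2) for diagonal covariances
    return (mu_x - mu_y).pow(2).sum() + (sd_x - sd_y).pow(2).sum()

FRD_THRESHOLD = 10.0  # hypothetical cut-off; would need tuning in practice

def keep_batch(feats_view1: torch.Tensor, feats_view2: torch.Tensor) -> bool:
    # Discard batches whose two augmented views drift too far apart,
    # a proxy for misleading (false positive) pairs in the batch.
    return frechet_distance_diag(feats_view1, feats_view2) < FRD_THRESHOLD

# A Huber loss can replace the usual L2 alignment term to damp outlier pairs:
huber = torch.nn.HuberLoss(delta=1.0)
# loss = huber(z1, z2)  # z1, z2: embeddings of the two views
```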
Self-supervised contrastive learning can be enhanced by incorporating local pivotal regions through a novel pretext task called Local Discrimination (LoDisc), leading to improved fine-grained visual recognition.
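A minimal sketch of one way to isolate pivotal local regions, assuming ViT-style patch tokens with CLS-to-patch attention as the saliency signal; `mask_pivotal_patches` and `keep_ratio` are hypothetical and stand in for LoDisc's actual selection rule.

```python
import torch

def mask_pivotal_patches(patch_tokens: torch.Tensor, attn: torch.Tensor,
                         keep_ratio: float = 0.3) -> torch.Tensor:
    """Keep only the most attended patches as the 'local' view.

    patch_tokens: (batch, num_patches, dim) patch embeddings.
    attn: (batch, num_patches) CLS-to-patch attention used as saliency.
    Returns tokens with non-pivotal patches zeroed out.
    """
    k = max(1, int(keep_ratio * attn.size(1)))
    topk = attn.topk(k, dim=1).indices                    # pivotal patch indices
    mask = torch.zeros_like(attn).scatter_(1, topk, 1.0)  # 1 = keep, 0 = drop
    return patch_tokens * mask.unsqueeze(-1)
```

The masked tokens can then be fed through the same encoder as the global view, giving a local discrimination signal alongside the global contrastive objective.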
The REBAR method uses retrieval-based reconstruction to identify positive pairs in time-series contrastive learning, achieving state-of-the-art performance.
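A simplified stand-in for the retrieval-based reconstruction measure: the anchor window is rebuilt from a candidate window via cross-attention, and the reconstruction error serves as the distance. REBAR's actual reconstructor is learned; the untrained `torch.nn.MultiheadAttention` here only shows the mechanics.

```python
import torch
import torch.nn.functional as F

def rebar_style_distance(anchor: torch.Tensor, candidate: torch.Tensor,
                         attn: torch.nn.MultiheadAttention) -> torch.Tensor:
    """Reconstruction error of `anchor` when rebuilt from `candidate`.

    anchor, candidate: (seq_len, 1, dim) time-series windows (seq-first layout).
    A low error suggests the two windows share motifs, marking them as positives.
    """
    # Query with the anchor, retrieve from the candidate, then compare the
    # retrieval-based reconstruction against the original anchor.
    recon, _ = attn(query=anchor, key=candidate, value=candidate)
    return F.mse_loss(recon, anchor)

# Usage sketch: attn = torch.nn.MultiheadAttention(embed_dim=64, num_heads=1)
# d = rebar_style_distance(anchor, candidate, attn)
```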
The proposed method provides a purely self-supervised global-local fine-grained contrastive learning framework that learns discriminative features at both the global level and at pivotal local levels.
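A minimal sketch of how a global-view term and a local (pivotal-region) term could be combined, assuming a standard InfoNCE objective; `w_local` and the temperature are illustrative values, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Standard InfoNCE: matching rows of z1 and z2 are positives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def global_local_loss(g1, g2, l1, l2, w_local: float = 0.5) -> torch.Tensor:
    # Sum a global-view term and a local pivotal-region term; w_local
    # weights the local branch and would need tuning in practice.
    return info_nce(g1, g2) + w_local * info_nce(l1, l2)
```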
Using the REBAR measure in time-series contrastive learning enables identifying positive pairs by motif similarity, leading to state-of-the-art performance on downstream tasks.
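To make the positive-pair selection concrete, a small sketch assuming a precomputed matrix of REBAR-style similarity scores (e.g. negated reconstruction errors); `select_positives` is a hypothetical helper, not part of the paper's code.

```python
import torch

def select_positives(sim: torch.Tensor) -> torch.Tensor:
    """Pick, for each anchor, the candidate with the highest motif similarity.

    sim: (num_anchors, num_candidates) similarity scores.
    Returns the index of the chosen positive for each anchor row.
    """
    return sim.argmax(dim=1)  # remaining candidates serve as negatives

# Usage: pos = select_positives(sim); the selected positives feed the
# contrastive loss, while the other candidates act as in-batch negatives.
```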