
Leveraging Positive and Unlabeled Data for Contrastive Representation Learning


Core Concepts
The paper proposes puNCE, a novel contrastive learning objective that leverages both the available positive labeled samples and the unlabeled samples to learn useful representations in the positive-unlabeled (PU) learning setting, where only a few positive labeled samples and a larger set of unlabeled samples are available.
Abstract
The paper investigates the limitations of the standard self-supervised pretraining and finetuning approach on the weakly supervised positive unlabeled (PU) learning task. It observes that self-supervised contrastive losses are unable to leverage the available supervision and produce inferior representations compared to fully supervised approaches. To address this, the paper proposes a novel contrastive training objective called puNCE (positive unlabeled Noise Contrastive Estimation) that extends the standard self-supervised InfoNCE loss to the PU setting by incorporating the available biased supervision via appropriate re-weighting of the samples. The key idea of puNCE is to treat each unlabeled sample as a positive example with probability π (the class prior) and as a negative example with probability (1-π). Labeled positive samples are given unit weight, while each unlabeled sample is duplicated, with one copy treated as positive and the other as negative, weighted by π and (1-π) respectively. This allows puNCE to leverage both the explicit supervision from the labeled positive samples and the implicit supervision from the unlabeled samples.

The paper evaluates puNCE on three settings with limited supervision: (a) positive unlabeled (PU) classification, (b) binary semi-supervised (PNU) classification, and (c) contrastive few-shot finetuning of a pretrained language model. Across all settings, puNCE consistently outperforms supervised and unsupervised contrastive losses, and is especially powerful when the available supervision is scarce. On the PU classification task, contrastive pretraining with puNCE yields large improvements over current state-of-the-art PU learning algorithms.
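To make the weighting scheme concrete, below is a minimal PyTorch-style sketch of a puNCE-like loss. It is an illustrative reconstruction from the description above, not the authors' released implementation: the two-views-per-batch convention, the tensor names, the default temperature, and the use of a large negative constant to mask self-similarities are all assumptions.

```python
import torch

def punce_loss(z, is_labeled_pos, prior, temperature=0.5):
    """Sketch of a puNCE-style contrastive loss (illustrative, not the paper's code).

    z              : (2N, d) L2-normalized embeddings; rows i and i+N are assumed
                     to be two augmented views of the same sample.
    is_labeled_pos : (N,) bool, True for labeled positive samples.
    prior          : class prior pi = P(y = +1).
    """
    n = is_labeled_pos.shape[0]
    labeled = torch.cat([is_labeled_pos, is_labeled_pos])          # (2N,)
    sim = z @ z.t() / temperature                                   # pairwise similarities
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)                          # drop self-pairs from the softmax
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)      # log p(j | anchor i) over the batch

    # Positive-pair masks.
    idx = torch.arange(n, device=z.device)
    aug_mask = torch.zeros_like(self_mask)
    aug_mask[idx, idx + n] = True                                   # each anchor's own augmented view
    aug_mask[idx + n, idx] = True
    pos_cols = labeled.unsqueeze(0).expand_as(sim).clone()          # columns holding labeled positives
    pos_cols = pos_cols & ~self_mask

    def mean_log_prob(mask):
        m = mask.float()
        return (log_prob * m).sum(1) / m.sum(1).clamp(min=1)        # average over designated positives

    attract_labeled_set = -mean_log_prob(pos_cols | aug_mask)       # pull toward labeled positives + own view
    attract_own_view = -mean_log_prob(aug_mask)                     # pull toward own view only (InfoNCE-like)

    unl = ~labeled
    # Labeled anchors get unit weight; each unlabeled anchor is split into a
    # "positive" copy (weight pi) and a "negative" copy (weight 1 - pi).
    loss = (attract_labeled_set[labeled].sum()
            + (prior * attract_labeled_set[unl]
               + (1 - prior) * attract_own_view[unl]).sum()) / (2 * n)
    return loss
```

In this sketch the negative-copy term reduces to a standard InfoNCE objective, while the positive-copy term matches the supervised contrastive term used for the labeled anchors, mirroring the π / (1-π) split described above.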
Stats
The paper presents several key statistics and figures to support the main claims:
- On PU CIFAR10 (vehicle/animal) with a ResNet18 encoder, puNCE improves over InfoNCE by more than 3% when using 20% labeled data and by more than 2% when using only 5% labeled data.
- On PU MNIST (odd/even) with an MLP encoder, puNCE improves over existing PU learning approaches by 8.9% when using only 1k (2%) positive labeled samples.
- On PNU CIFAR10 (vehicle/animal) with an All-Conv encoder, puNCE improves over InfoNCE by 4.13% and over SCL by 1.39% when using only 1% labeled data.
- On SST2 sentiment classification with RoBERTa, puNCE achieves 2.97%, 0.93%, and 0.47% improvement over the previous state-of-the-art SCL approach when using 10, 100, and 1000 labeled samples respectively.
Quotes
"The core message of this paper is to propose a novel contrastive learning objective, called puNCE, that can effectively leverage both the available positive labeled samples and the unlabeled samples to learn useful representations in the positive unlabeled (PU) learning setting, where only a few positive labeled samples and a set of unlabeled samples are available." "The key idea of puNCE is to treat each unlabeled sample as a positive example with probability π (the class prior) and as a negative example with probability (1-π)." "Across all settings, puNCE consistently outperforms supervised and unsupervised contrastive losses, and is especially powerful when the available supervision is scarce."

Key Insights Distilled From

by Anish Achary... at arxiv.org 04-01-2024

https://arxiv.org/pdf/2206.01206.pdf
Positive Unlabeled Contrastive Learning

Deeper Inquiries

How can the puNCE objective be extended to handle more complex label structures beyond binary classification, such as multi-class or structured prediction problems?

The puNCE formulation can be adapted to more complex label structures. For multi-class classification, the objective can be modified to consider multiple classes and to distribute the implicit supervision from unlabeled samples accordingly: each unlabeled sample is weighted across the candidate classes according to their class priors, while the labeled samples provide explicit supervision for learning discriminative representations of each class (a sketch of this weighting follows below). For structured prediction, the positive and negative pairs can be defined with respect to the dependencies between parts of the output structure, so that puNCE learns representations that capture the underlying structure of the data.
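One hypothetical way to instantiate this for K classes, assuming the per-class priors are known, is to replace the binary π / (1-π) split with an expectation over classes. The helper below is an illustrative sketch (names and numbers are made up), not something defined in the paper.

```python
import torch

def multiclass_weighted_term(per_class_losses, class_priors):
    """Hypothetical multi-class analogue of the puNCE unlabeled term.

    per_class_losses : (K,) loss of one unlabeled anchor when it is treated
                       as belonging to each of the K classes.
    class_priors     : (K,) assumed class priors, summing to 1.

    With K = 2 this reduces to pi * loss_pos + (1 - pi) * loss_neg,
    i.e. the binary puNCE weighting.
    """
    return (class_priors * per_class_losses).sum()

# Example with three classes and placeholder per-class contrastive losses.
priors = torch.tensor([0.5, 0.3, 0.2])
losses = torch.tensor([1.2, 0.7, 2.1])
print(multiclass_weighted_term(losses, priors))   # expected loss under the priors
```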

What are the theoretical guarantees and convergence properties of the puNCE objective compared to other PU learning approaches?

Theoretical guarantees and convergence properties of puNCE can be analyzed along two axes: generalization and optimization. On the generalization side, puNCE leverages both explicit supervision (from labeled samples) and implicit supervision (from unlabeled samples), which can improve performance when supervision is limited. On the optimization side, convergence depends on the algorithm used to minimize the objective, and analyzing its behavior allows comparing the stability and efficiency of puNCE against other PU learning approaches. Formal guarantees would need to account for the properties of the loss function, the optimization algorithm, and the assumptions of the PU learning problem, such as a known class prior π and the standard assumptions on how positive labels are sampled.

Can the ideas behind puNCE be applied to other weakly supervised learning settings beyond PU learning, such as semi-supervised learning or learning from noisy labels?

The ideas behind puNCE can be applied to other weakly supervised learning settings beyond PU learning, such as semi-supervised learning or learning from noisy labels. In semi-supervised learning, puNCE can be adapted to handle scenarios where a small amount of labeled data is available along with a larger set of unlabeled data; by incorporating the explicit supervision from the labeled samples and the implicit supervision from the unlabeled samples, it can learn representations that benefit from both sources of information. In the context of learning from noisy labels, puNCE can be used to exploit the available supervision while mitigating the impact of label noise: by appropriately weighting the samples based on the reliability of their labels, it can learn robust representations despite the noise. This adaptability showcases the versatility of the puNCE approach across weakly supervised learning settings.
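As a rough illustration of that reliability-weighting idea (an assumption of this answer, not something specified in the paper), each sample could be duplicated into a "label kept" copy weighted by an estimated reliability r and a "label flipped" copy weighted by 1 - r, mirroring the π / (1-π) split of puNCE.

```python
import torch

def noisy_label_weighted_loss(loss_as_labeled, loss_as_flipped, reliability):
    """Hypothetical reliability-weighted loss, mirroring puNCE's pi / (1 - pi) split.

    loss_as_labeled : (N,) contrastive loss when each sample keeps its observed label.
    loss_as_flipped : (N,) contrastive loss when the observed label is assumed wrong.
    reliability     : (N,) estimated probability that each observed label is correct.
    """
    return (reliability * loss_as_labeled + (1 - reliability) * loss_as_flipped).mean()
```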