The PRCL framework addresses two weaknesses of contrastive learning in semi-supervised semantic segmentation: inaccurate pseudo-labels and prototype shift. It introduces probabilistic representations, global distribution prototypes, and virtual negatives to improve robustness and performance, and extensive experiments on public benchmarks demonstrate the effectiveness of the proposed method.
The paper discusses the challenges of semi-supervised semantic segmentation, introduces the novel PRCL framework, and explains probabilistic representation modeling, global distribution prototypes, and virtual negative generation. The training objective combines supervised, unsupervised, and contrastive losses. Experiments on the PASCAL VOC 2012 and Cityscapes datasets validate the superiority of the PRCL framework over existing methods.
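As a minimal sketch of the probabilistic representation idea, the snippet below maps backbone features to a diagonal Gaussian (mean plus log-variance) per pixel and scores similarity with a mutual-likelihood-style function; the module names, dimensions, and exact score are illustrative assumptions, not the paper's verbatim architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProbabilisticHead(nn.Module):
    """Maps backbone features to a diagonal-Gaussian representation per pixel.

    Hypothetical module: the representation is a distribution (mean + variance)
    rather than a point embedding; layer sizes here are illustrative only.
    """
    def __init__(self, in_dim: int = 256, rep_dim: int = 64):
        super().__init__()
        self.mu_head = nn.Conv2d(in_dim, rep_dim, kernel_size=1)
        self.logvar_head = nn.Conv2d(in_dim, rep_dim, kernel_size=1)

    def forward(self, feats: torch.Tensor):
        mu = F.normalize(self.mu_head(feats), dim=1)   # mean of the Gaussian
        logvar = self.logvar_head(feats)                # log-variance (uncertainty)
        return mu, logvar


def mutual_likelihood_score(mu1, logvar1, mu2, logvar2):
    """Similarity between two diagonal Gaussians (higher = more similar).

    Squared mean distance scaled by the summed variances, plus a
    log-variance penalty, averaged over channels.
    """
    var_sum = logvar1.exp() + logvar2.exp()
    score = -((mu1 - mu2) ** 2) / var_sum - torch.log(var_sum)
    return 0.5 * score.mean(dim=1)
```

Because the score divides by the summed variances, pixels with unreliable pseudo-labels (high predicted variance) contribute only weak similarities, which is the intuition behind preferring probabilistic over deterministic representations.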
Key points include addressing inaccurate pseudo-labels through probabilistic representations, maintaining prototype consistency with global distribution prototypes, and compensating for fragmentary negative distributions with virtual negatives. The proposed method outperforms baselines and state-of-the-art approaches under various label rates on both datasets.
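The following sketch illustrates, under stated assumptions, how a global distribution prototype might be maintained across iterations with an EMA-style update, how virtual negatives could be sampled from it, and how the three loss terms combine; the momentum, sample count, and loss weights are placeholder values, not the paper's reported settings.

```python
import torch

def ema_update_prototype(proto_mu, proto_var, batch_mu, batch_var, momentum=0.99):
    """EMA-style update of a global distribution prototype (mean + variance).

    Assumption: the global prototype is itself a Gaussian aggregated across
    iterations, so it stays consistent even when a class is missing from the
    current mini-batch.
    """
    new_mu = momentum * proto_mu + (1.0 - momentum) * batch_mu
    new_var = momentum * proto_var + (1.0 - momentum) * batch_var
    return new_mu, new_var


def sample_virtual_negatives(proto_mu, proto_var, num_samples=16):
    """Draw virtual negatives from the prototype's distribution instead of
    relying only on the (possibly fragmentary) negatives present in the batch."""
    std = proto_var.clamp_min(1e-6).sqrt()
    eps = torch.randn(num_samples, *proto_mu.shape)
    return proto_mu.unsqueeze(0) + eps * std.unsqueeze(0)


def total_loss(loss_sup, loss_unsup, loss_contrastive, lambda_u=1.0, lambda_c=0.1):
    """Overall objective per the summary: supervised + unsupervised (pseudo-label)
    + contrastive terms; lambda_u and lambda_c are assumed hyper-parameters."""
    return loss_sup + lambda_u * loss_unsup + lambda_c * loss_contrastive
```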
Key insights drawn from the original content by Haoyu Xie, Ch... at arxiv.org, 02-29-2024
https://arxiv.org/pdf/2402.18117.pdf