This paper proposes a novel unsupervised framework for visible-infrared person re-identification (VI-ReID) that addresses two key challenges: noisy pseudo-labels and the large modality gap between visible and infrared images.
At its core is an adaptive intra-class variation contrastive learning algorithm for unsupervised person re-identification, which selects appropriate samples and outliers to dynamically update the memory dictionary according to the model's current learning capability, yielding improved performance and faster convergence.
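To make the memory-dictionary idea concrete, below is a minimal sketch of cluster-memory contrastive learning with reliability-based sample selection. This is not the paper's implementation: the reliability scores, the selection threshold, and the momentum value are illustrative assumptions standing in for the paper's adaptive selection criterion.

```python
import numpy as np

def update_memory(memory, feats, labels, reliab, momentum=0.2, thresh=0.6):
    """Momentum-update per-cluster centroids in the memory dictionary.
    Only samples whose (hypothetical) reliability score passes the
    threshold contribute; low-score samples are treated as outliers
    and skipped, so they cannot corrupt the centroids."""
    for f, y, r in zip(feats, labels, reliab):
        if r < thresh:  # outlier: excluded from the memory update
            continue
        memory[y] = (1.0 - momentum) * memory[y] + momentum * f
        memory[y] /= np.linalg.norm(memory[y])  # keep centroids unit-norm
    return memory

def contrastive_loss(memory, feats, labels, temperature=0.05):
    """InfoNCE loss of each feature against all cluster centroids,
    with its pseudo-labeled centroid as the positive."""
    logits = feats @ memory.T / temperature          # (N, K) similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(labels)), labels].mean()

# Demo with random unit-norm features and three pseudo-label clusters.
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
memory = rng.normal(size=(3, 16))
memory /= np.linalg.norm(memory, axis=1, keepdims=True)
labels = np.array([0, 1, 2, 0, 1, 2, 0, 1])
reliab = np.array([0.9, 0.2, 0.8, 0.7, 0.95, 0.1, 0.5, 0.99])

loss = contrastive_loss(memory, feats, labels)
memory = update_memory(memory, feats, labels, reliab)
```

The adaptive aspect described in the summary would replace the fixed `thresh` with a criterion that changes as the model's learning capability grows, so the pool of samples admitted to the memory update expands or contracts over training.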