Key Concepts
An adaptive intra-class variation contrastive learning algorithm for unsupervised person re-identification that selects appropriate samples and outliers to dynamically update the memory dictionary based on the current learning capability of the model, leading to improved performance and faster convergence.
Summary
The paper proposes an adaptive intra-class variation contrastive learning algorithm called AdaInCV for unsupervised person re-identification. The key contributions are:
- AdaInCV uses the intra-class variation of each cluster after clustering to assess the model's learning capability for each class separately, allowing samples of appropriate difficulty to be selected during training.
- Two new strategies are introduced:
  - Adaptive Sample Mining (AdaSaM) lets the model select samples of appropriate difficulty, based on the learning ability for each cluster, to update the memory.
  - Adaptive Outlier Filter (AdaOF) uses the model's learning ability across the entire dataset to select appropriate outliers as negative samples, enhancing contrastive learning.
- Extensive experiments on two large-scale benchmarks (Market-1501 and MSMT17) demonstrate that AdaInCV outperforms previous state-of-the-art unsupervised person re-identification methods while also converging faster.
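The selection logic described above can be sketched as follows. This is a minimal illustration of the adaptive idea, not the authors' exact formulation: the per-cluster variation measure, the median split between "hard" and "easy" sampling, the distance-based outlier threshold, and all function names are assumptions made for the example.

```python
import numpy as np

def intra_class_variation(features, labels):
    """Mean distance of each cluster's features to its centroid.

    Returns a dict mapping cluster id -> variation score; a higher
    score means the model separates that identity less cleanly.
    """
    variations = {}
    for c in np.unique(labels):
        members = features[labels == c]
        centroid = members.mean(axis=0)
        variations[c] = float(np.linalg.norm(members - centroid, axis=1).mean())
    return variations

def select_memory_samples(features, labels, variations):
    """AdaSaM-style selection (sketch): for a well-learned cluster
    (low variation) pick a harder sample, far from the centroid; for
    a poorly learned cluster pick an easier one, close to the centroid.
    The median split is an illustrative assumption."""
    median_var = np.median(list(variations.values()))
    selected = {}
    for c in np.unique(labels):
        members = features[labels == c]
        centroid = members.mean(axis=0)
        dists = np.linalg.norm(members - centroid, axis=1)
        if variations[c] <= median_var:      # cluster learned well -> hard sample
            selected[c] = members[np.argmax(dists)]
        else:                                # cluster still noisy -> easy sample
            selected[c] = members[np.argmin(dists)]
    return selected

def filter_outliers(outlier_feats, centroids, threshold):
    """AdaOF-style filter (sketch): keep only un-clustered samples that
    are far enough from every cluster centroid to serve as reliable
    negatives for contrastive learning."""
    kept = [f for f in outlier_feats
            if np.linalg.norm(centroids - f, axis=1).min() > threshold]
    return np.array(kept)
```

In this sketch, selected samples would be used to update the memory dictionary each epoch, and the filtered outliers would be appended as extra negatives in the contrastive loss.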
Statistics
The paper reports the following key statistics:
The Market-1501 dataset consists of 32,668 annotated images of 1,501 identities, with 12,936 training images of 751 identities and 19,732 test images of 750 identities.
The MSMT17 dataset consists of 126,441 bounding boxes of 4,101 identities, with 32,621 training images of 1,041 identities and 93,820 test images of 3,060 identities.
Quotes
"The memory dictionary-based contrastive learning method has achieved remarkable results in the field of unsupervised person Re-ID. However, The method of updating memory based on all samples does not fully utilize the hardest sample to improve the generalization ability of the model, and the method based on hardest sample mining will inevitably introduce false-positive samples that are incorrectly clustered in the early stages of the model."
"Clustering-based methods usually discard a significant number of outliers, leading to the loss of valuable information."