The paper first investigates the relationship between the standard and clothes-changing (CC) learning objectives in CC-ReID. It observes that same-clothes discrimination, the learning objective of standard ReID, has been persistently overlooked in prior CC-ReID research, and that an inherent conflict exists between the two objectives.
To address this, the paper proposes to synthesize high-fidelity clothes-varying samples using a Clothes-Changing Diffusion (CC-Diffusion) model. CC-Diffusion takes different clothes from the same dataset as conditioning inputs and generates clothes-varying samples that preserve the physical features of the given persons. Quantitative experiments confirm the high synthesis quality and the resulting improvement on CC-ReID tasks.
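The conditioning idea can be illustrated with a minimal toy sketch. This is not the paper's CC-Diffusion implementation: the denoiser below is a hypothetical stand-in for a trained network, and the "clothes embedding" is just a vector, but it shows how a reverse (denoising) process driven by a controlling condition lets the same starting identity be rendered under different clothes codes.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_denoiser(x_t, t, cond):
    # Stand-in for a trained conditional network: it pulls the noisy
    # sample x_t toward the clothes condition, more strongly late in
    # the reverse process (small t).
    strength = 1.0 - t
    return x_t + strength * (cond - x_t)

def sample(cond, steps=50, dim=4):
    x = rng.normal(size=dim)              # start from pure noise
    for i in reversed(range(steps)):
        t = i / steps
        x = toy_denoiser(x, t, cond)      # one reverse (denoising) step
    return x

clothes_a = np.zeros(4)                   # hypothetical clothes embedding A
clothes_b = np.ones(4)                    # hypothetical clothes embedding B
print(sample(clothes_a), sample(clothes_b))
```

Swapping `clothes_a` for `clothes_b` changes only the condition, mirroring how CC-Diffusion varies clothes while the rest of the generation pipeline stays fixed.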
However, introducing the synthetic CC data inevitably shifts the focus toward clothes-irrelevant cues and weakens the standard ReID objective. To mitigate this conflict, the paper re-formulates the learning of CC-ReID as a multi-objective optimization (MOO) problem, in which the standard and CC objectives are disentangled and optimized synergistically. By properly partitioning the training samples and designing the sampling strategies, the conflicting objectives are effectively regularized and a set of Pareto optimal solutions is obtained. Furthermore, human preference vectors are introduced to ensure convergence to the desired balance between standard and CC-ReID.
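The role of the preference vector can be sketched with a toy scalarized MOO example. This is only an illustration under strong assumptions, not the paper's method: two conflicting quadratic objectives stand in for the standard and CC-ReID losses, and gradient descent on the preference-weighted sum converges to a Pareto optimal point whose position depends on the chosen weights.

```python
# Toy sketch: two conflicting objectives with minima at x = 0 and x = 1
# (stand-ins for the standard and CC-ReID losses). A preference vector
# (w_std, w_cc) scalarizes the problem; the resulting optimum trades off
# the two objectives according to the weights.

def grad(x, w_std, w_cc):
    # gradient of w_std * x^2 + w_cc * (x - 1)^2
    return w_std * 2 * x + w_cc * 2 * (x - 1.0)

def optimize(w_std, w_cc, x=0.5, lr=0.1, steps=200):
    for _ in range(steps):
        x -= lr * grad(x, w_std, w_cc)
    return x

# Equal preference lands midway between the two minima; a CC-leaning
# preference shifts the Pareto point toward the CC optimum.
x_equal = optimize(0.5, 0.5)   # ≈ 0.5
x_cc    = optimize(0.2, 0.8)   # ≈ 0.8
print(x_equal, x_cc)
```

Each preference vector selects one point on the Pareto front, which matches the summary's claim that preference vectors steer convergence toward a desired balance between the two protocols.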
The proposed framework is model-agnostic and demonstrates superior performance under both CC and standard ReID protocols, outperforming existing CC-ReID methods.
Source: Junjie Li, Gu... et al., arxiv.org, 2024-04-22. https://arxiv.org/pdf/2404.12611.pdf