A deep ensemble learning framework that leverages both CNN and Transformer architectures to generate robust feature representations for occluded person re-identification.
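The summary does not specify how the two branches are fused; a minimal sketch of one common choice (L2-normalising each backbone's embedding before concatenation, so neither branch dominates the distance metric) is shown below. The function name and the fusion strategy are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def ensemble_features(cnn_feat, vit_feat):
    """Fuse a CNN embedding and a Transformer embedding into one descriptor.

    Hypothetical fusion: each branch is L2-normalised before concatenation
    so that both contribute equally to Euclidean/cosine retrieval distances.
    """
    f1 = cnn_feat / (np.linalg.norm(cnn_feat) + 1e-12)  # unit-norm CNN feature
    f2 = vit_feat / (np.linalg.norm(vit_feat) + 1e-12)  # unit-norm Transformer feature
    return np.concatenate([f1, f2])
```

Because each half has unit norm, the fused descriptor has norm sqrt(2), and cosine similarity on it equals the average of the two per-branch similarities.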
A novel part-attention-based model (PAB-ReID) is proposed for occluded person re-identification, combining three components: human parsing labels that supervise accurate part attention maps, a fine-grained feature focuser that suppresses background interference, and a part triplet loss that learns robust local features.
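A part triplet loss typically applies the standard hinge-based triplet loss to each body-part feature independently and averages over parts; a minimal sketch of that idea follows. The exact formulation in PAB-ReID may differ (e.g. in mining strategy or distance metric), so treat this as an illustrative assumption.

```python
import numpy as np

def part_triplet_loss(anchor, positive, negative, margin=0.3):
    """Triplet hinge loss applied per body part, then averaged.

    anchor/positive/negative: arrays of shape (K, D) holding K part
    features of dimension D for one anchor, positive, and negative sample.
    """
    d_ap = np.linalg.norm(anchor - positive, axis=1)  # per-part anchor-positive distance
    d_an = np.linalg.norm(anchor - negative, axis=1)  # per-part anchor-negative distance
    # Hinge: penalise parts where the positive is not closer than the
    # negative by at least `margin`.
    return np.maximum(0.0, d_ap - d_an + margin).mean()
```

The loss is zero once every part of the positive sample is closer to the anchor than the corresponding part of the negative by the margin, which is what drives the local features toward identity-discriminative, occlusion-robust representations.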
Leveraging textual prompts and hybrid attention mechanisms to generate well-aligned part features for occluded person re-identification, while preserving pre-trained knowledge to improve generalization.
This paper introduces DDRN, a generative model that improves occluded person re-identification by reconstructing image features according to the learned data distribution, thereby mitigating occlusion and background interference.
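The core idea, reconstructing a corrupted feature from a model of the clean feature distribution, can be illustrated with a simple PCA projection: components of an occluded feature that fall outside the learned low-rank basis are discarded. This is only a toy analogue; DDRN's actual generative network is not specified here, and the function names below are invented for illustration.

```python
import numpy as np

def fit_basis(features, rank=8):
    """Learn a low-rank PCA basis of (clean) gallery features.

    features: array of shape (N, D). Returns the mean and the top-`rank`
    principal directions, a crude stand-in for a learned data distribution.
    """
    mean = features.mean(axis=0)
    _, _, vt = np.linalg.svd(features - mean, full_matrices=False)
    return mean, vt[:rank]

def reconstruct(feature, mean, basis):
    """Project a (possibly occlusion-corrupted) feature onto the basis
    and map it back, suppressing components outside the learned subspace."""
    code = basis @ (feature - mean)
    return mean + basis.T @ code
```

A feature that already lies in the learned subspace is reproduced exactly, while off-subspace perturbations (the toy analogue of occlusion noise) are removed by the projection.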