The paper presents a novel approach called Weakly-Supervised Deep Hyperspherical Quantization (WSDHQ) for efficient image retrieval. The key highlights are:
WSDHQ is the first work to enhance the weak supervision provided by user tags for the task of image quantization. It builds a tag-embedding correlation graph to enrich tag semantics and reduce their sparsity, as sketched below.
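A minimal sketch of this idea, assuming pre-trained word embeddings (e.g., GloVe) for each tag: build a cosine-similarity correlation graph over the tag embeddings and smooth each embedding with its graph neighbors. The threshold, mixing weight, and propagation rule here are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def enhance_tag_embeddings(tag_emb: np.ndarray, sim_threshold: float = 0.5,
                           alpha: float = 0.5) -> np.ndarray:
    """tag_emb: (num_tags, dim) pre-trained word embeddings, one per tag."""
    # Cosine-similarity matrix between all tag embeddings.
    normed = tag_emb / np.linalg.norm(tag_emb, axis=1, keepdims=True)
    sim = normed @ normed.T
    # Keep only edges above the threshold (the correlation graph), no self-loops.
    adj = (sim > sim_threshold).astype(np.float32)
    np.fill_diagonal(adj, 0.0)
    # Row-normalize the adjacency and mix each tag with its neighbors,
    # densifying sparse / noisy tag semantics.
    deg = adj.sum(axis=1, keepdims=True).clip(min=1.0)
    smoothed = alpha * tag_emb + (1 - alpha) * (adj / deg) @ tag_emb
    return smoothed
```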
To reduce deep quantization error, WSDHQ removes the norm variance of deep features via ℓ2 normalization, mapping visual representations onto a semantic hypersphere spanned by the tag embeddings.
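A minimal sketch of this projection step, assuming a generic CNN backbone producing `features`: ℓ2-normalize both the image features and the (enhanced) tag embeddings so only their angular relationship matters. The interface below is an illustrative assumption, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def to_hypersphere(features: torch.Tensor, tag_embeddings: torch.Tensor):
    """features: (batch, dim) deep features; tag_embeddings: (num_tags, dim)."""
    # L2-normalize both, removing the norm variance of the deep features.
    z = F.normalize(features, p=2, dim=1)
    t = F.normalize(tag_embeddings, p=2, dim=1)
    # Cosine similarity between each image and each tag direction on the hypersphere.
    cos_sim = z @ t.t()
    return z, cos_sim
```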
WSDHQ further improves how well the quantization model preserves semantic information in its quantization codes by designing an adaptive cosine margin loss and a supervised cosine quantization loss.
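A minimal sketch of what such objectives can look like: a cosine-margin classification term over the image-tag similarities plus a cosine penalty between a feature and its quantized reconstruction. The fixed margin, scale, and masking scheme here are illustrative assumptions; the paper's adaptive margin and supervised quantization losses are more elaborate.

```python
import torch
import torch.nn.functional as F

def cosine_margin_loss(cos_sim, target_mask, margin=0.2, scale=16.0):
    """cos_sim: (batch, num_tags); target_mask: 1.0 for an image's tags, else 0.0."""
    # Subtract a margin from positive-tag similarities before the softmax,
    # pulling features toward their semantic (tag) directions.
    logits = scale * (cos_sim - margin * target_mask)
    log_prob = F.log_softmax(logits, dim=1)
    per_image = -(log_prob * target_mask).sum(1) / target_mask.sum(1).clamp(min=1)
    return per_image.mean()

def cosine_quantization_loss(z, z_quantized):
    """Penalize the angular gap between a feature and its quantized reconstruction."""
    cos = F.cosine_similarity(z, z_quantized, dim=1)
    return (1.0 - cos).mean()
```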
Extensive experiments show that WSDHQ can achieve state-of-the-art performance on weakly-supervised compact coding for image retrieval.