Core Concepts
Integrating composition cues improves historical image retrieval.
Stats
Existing content-based retrieval methods for analyzing vast collections of digitized historical images often overlook significant non-semantic information, which limits their effectiveness for flexible exploration across varied themes.
Experimental results demonstrate that our CBIRNet, which leverages both composition and content information, retrieves images perceptually closer to the target image across various styles than networks relying solely on content-based retrieval.
Our model achieved an accuracy of 0.73, precision of 0.71, recall of 0.70, and an F1 score of 0.70 when evaluated on the grayscale KU-PCP dataset.
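The summary does not say how these classification metrics were aggregated; as a reminder of their definitions, here is a minimal sketch computing accuracy, precision, recall, and F1 from hypothetical binary confusion-matrix counts (the paper likely averages over the KU-PCP composition classes):

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard binary classification metrics from confusion-matrix counts.

    tp/fp/fn/tn: true positives, false positives, false negatives, true negatives.
    """
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts, chosen only to illustrate the formulas.
acc, prec, rec, f1 = classification_metrics(tp=7, fp=3, fn=3, tn=7)
print(acc, prec, rec, f1)
```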
Two metrics were employed to assess model performance: the cosine embedding loss between anchor images and positive samples, and the cosine similarity between anchor images and positive/negative samples.
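The summary does not give implementation details for these two metrics; a minimal NumPy sketch, assuming the cosine embedding loss follows the standard definition (as in PyTorch's `CosineEmbeddingLoss`: `1 - cos` for positive pairs, `max(0, cos - margin)` for negative pairs) applied to embedding vectors:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def cosine_embedding_loss(anchor, other, target, margin=0.0):
    """Cosine embedding loss for one pair.

    target = 1 for a positive pair (pull together): 1 - cos(anchor, other).
    target = -1 for a negative pair (push apart): max(0, cos(anchor, other) - margin).
    """
    cos = cosine_similarity(anchor, other)
    return 1.0 - cos if target == 1 else max(0.0, cos - margin)

# Toy embeddings: identical anchor/positive, orthogonal negative.
anchor = [1.0, 0.0]
positive = [1.0, 0.0]
negative = [0.0, 1.0]
print(cosine_similarity(anchor, positive))         # 1.0
print(cosine_embedding_loss(anchor, positive, 1))  # 0.0
print(cosine_embedding_loss(anchor, negative, -1)) # 0.0 (orthogonal, margin 0)
```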
Quotes
"By explicitly integrating composition-related information extracted by CNN into the designed retrieval model, our method considers both the image’s composition rules and semantic information."
"Our proposed Content-Based Image Retrieval Network (CBIRNet) merges composition information with content feature extraction."