The paper introduces MSCoTDet, a framework that incorporates Large Language Models (LLMs) to improve multispectral pedestrian detection. It addresses modality bias and dataset limitations by generating text descriptions of pedestrians in each modality and applying step-by-step reasoning to reach a detection decision. The framework consists of a vision branch, a language branch, and a Language-driven Multi-modal Fusion (LMF) strategy that integrates the two. Experimental results demonstrate that MSCoTDet improves detection performance across multiple multispectral pedestrian datasets.
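To make the fusion idea concrete, below is a minimal, hypothetical sketch of how per-candidate confidence scores from the two vision modalities (RGB and thermal) and the language branch could be combined with a small learned gate. The class name `LanguageDrivenFusion`, the layer sizes, and the softmax gating scheme are illustrative assumptions, not the paper's actual LMF implementation.

```python
import torch
import torch.nn as nn

class LanguageDrivenFusion(nn.Module):
    """Hypothetical LMF-style fusion head (illustrative only).

    A small MLP produces convex weights over the three per-candidate
    scores, so an unreliable modality (e.g., RGB at night) can be
    down-weighted in favor of the others.
    """

    def __init__(self, hidden_dim: int = 16):
        super().__init__()
        # Input per candidate: [rgb_score, thermal_score, language_score]
        self.gate = nn.Sequential(
            nn.Linear(3, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 3),
            nn.Softmax(dim=-1),  # convex weights over the three scores
        )

    def forward(self, rgb: torch.Tensor, thermal: torch.Tensor,
                language: torch.Tensor) -> torch.Tensor:
        scores = torch.stack([rgb, thermal, language], dim=-1)  # (N, 3)
        weights = self.gate(scores)                             # (N, 3)
        # Fused score: learned convex combination of the branch scores.
        return (weights * scores).sum(dim=-1)                   # (N,)

# Usage on dummy per-candidate scores:
fusion = LanguageDrivenFusion()
rgb = torch.tensor([0.9, 0.2])        # RGB-branch confidences
thermal = torch.tensor([0.8, 0.7])    # thermal-branch confidences
language = torch.tensor([0.85, 0.6])  # language-branch (LLM reasoning) confidences
print(fusion(rgb, thermal, language))  # fused confidences, shape (2,)
```

A gated combination like this is one plausible way to let language-derived confidence correct modality bias in the vision scores; the paper's actual fusion strategy should be taken from the source at the link below.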
Key insights distilled from: Taeheon Kim et al., arxiv.org, 03-25-2024
https://arxiv.org/pdf/2403.15209.pdf