Comprehensive Survey on Hallucination Challenges in Large Vision-Language Models
Hallucination, the misalignment between visual input and generated text, poses a significant challenge to the practical deployment of Large Vision-Language Models (LVLMs). This survey provides a comprehensive overview of hallucination in LVLMs to facilitate future mitigation efforts.