Large Visual-Language Models Demonstrate Effectiveness in Multimodal Fake News Detection through In-Context Learning
Large Visual-Language Models (LVLMs) can be applied effectively to multimodal fake news detection: when combined with in-context learning and predictive insights from a smaller multimodal model, they match, and in some cases exceed, the performance of smaller, task-specifically trained models.
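The following Python sketch illustrates, under stated assumptions, how such a setup might look: a few labeled image-claim demonstrations form the in-context prompt, and the smaller model's prediction is appended as an auxiliary hint. The `lvlm_generate` callable, the `Example` record, and the message schema are hypothetical stand-ins for whatever LVLM interface is used, not the paper's actual implementation.

```python
# Minimal sketch of in-context learning for multimodal fake news detection.
# `lvlm_generate` is a hypothetical stand-in for an LVLM inference call;
# `hint` carries the smaller multimodal model's prediction for the query.

from dataclasses import dataclass

@dataclass
class Example:
    image_path: str
    claim: str
    label: str  # "real" or "fake"

def build_prompt(demos: list[Example], image_path: str,
                 claim: str, hint: str) -> list[dict]:
    """Assemble a few-shot multimodal prompt: labeled demonstrations,
    then the query pair plus the smaller model's verdict as a hint."""
    messages = [{"role": "system",
                 "content": "You are a fact-checker. Answer 'real' or 'fake'."}]
    for d in demos:  # in-context demonstrations with gold labels
        messages.append({"role": "user",
                         "content": [{"type": "image", "path": d.image_path},
                                     {"type": "text", "text": f"Claim: {d.claim}"}]})
        messages.append({"role": "assistant", "content": d.label})
    messages.append({"role": "user",  # the query item to classify
                     "content": [{"type": "image", "path": image_path},
                                 {"type": "text",
                                  "text": (f"Claim: {claim}\n"
                                           f"A smaller detector predicts: {hint}. "
                                           "Give your own verdict.")}]})
    return messages

def classify(demos, image_path, claim, hint, lvlm_generate) -> str:
    """Return the LVLM's verdict for one image-claim pair."""
    reply = lvlm_generate(build_prompt(demos, image_path, claim, hint))
    return "fake" if "fake" in reply.lower() else "real"
```

Passing the smaller model's verdict as plain text in the final user turn is one plausible way to inject its "insights"; the actual mechanism (e.g., confidence scores or rationales) depends on the method's details.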