OPERA introduces a novel decoding method that alleviates hallucination in multimodal large language models (MLLMs) by combining an Over-trust Penalty with a Retrospection-Allocation strategy. The approach reduces hallucinations without requiring additional data, knowledge, or training.
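The two mechanisms can be illustrated with a toy sketch. This is not OPERA's actual implementation (the paper operates on beam-search self-attention maps inside the model); everything below is a simplified, hypothetical stand-in: fixed-length attention rows, and the `scale`, `alpha`, and `rollback_thresh` values are arbitrary. The idea it illustrates is the paper's: score each column of the recent attention map by a column-wise product (a "knowledge aggregation" token that every recent step over-attends to gets a large score), subtract that score from candidate logits, and if the chosen candidate still over-trusts one anchor token, signal a rollback to that position for re-decoding.

```python
import numpy as np

def overtrust_score(attn, window=4, scale=10.0):
    """Toy version of OPERA's over-trust metric: column-wise product of
    the last `window` rows of a (causal) self-attention map. A column
    whose token keeps receiving high attention from every recent step
    accumulates a large score."""
    rows = attn[-window:]                            # (w, T) recent rows
    return np.prod(np.clip(scale * rows, 1e-6, None), axis=0)  # (T,)

def penalized_select(logits, past_attn, cand_rows, alpha=1.0,
                     window=4, rollback_thresh=2000.0):
    """Pick the next token by argmax of (logit - alpha * max column
    score), appending each candidate's own attention row before scoring.
    If the winner's score still exceeds `rollback_thresh`, return the
    offending column index as a retrospection-rollback anchor."""
    best, best_val, best_cols = None, -np.inf, None
    for k, row in enumerate(cand_rows):
        attn = np.vstack([past_attn, row])           # hypothetical map
        cols = overtrust_score(attn, window)
        val = logits[k] - alpha * cols.max()         # over-trust penalty
        if val > best_val:
            best, best_val, best_cols = k, val, cols
    anchor = int(best_cols.argmax())
    if best_cols.max() > rollback_thresh:
        return best, anchor                          # roll back here
    return best, None                                # accept the token
```

In this toy, a candidate that piles more attention onto an already over-attended "summary" column is penalized below a candidate that spreads its attention, even when its raw logit is higher, and disabling the penalty (`alpha=0`) both restores the greedy choice and trips the rollback signal.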