LVLM-Interpret: An Interpretability Tool for Understanding Large Vision-Language Models
LVLM-Interpret is a novel interactive application designed to enhance the interpretability of large vision-language models. It provides insight into a model's internal mechanisms, including image patch importance, attention patterns, and causal relationships.
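To illustrate one of these mechanisms, the sketch below shows how image patch importance can be derived from a model's attention weights: for a decoder layer whose keys include image-patch tokens, average the attention each patch receives over heads and query positions. The tensor shapes, token layout, and `patch_importance` helper are illustrative assumptions for this sketch, not LVLM-Interpret's actual internals.

```python
import numpy as np

# Hypothetical attention tensor from one decoder layer of a vision-language
# model, shaped (heads, query_tokens, key_tokens). By assumption, the first
# n_patches key tokens correspond to image patches and the rest to text.
rng = np.random.default_rng(0)
n_heads, n_queries, n_patches, n_text = 8, 16, 64, 16
attn = rng.random((n_heads, n_queries, n_patches + n_text))
attn /= attn.sum(axis=-1, keepdims=True)  # rows sum to 1, like softmax output

def patch_importance(attn: np.ndarray, n_patches: int) -> np.ndarray:
    """Mean attention each image patch receives, averaged over heads and queries."""
    return attn[:, :, :n_patches].mean(axis=(0, 1))

imp = patch_importance(attn, n_patches)
heatmap = imp.reshape(8, 8)  # 8x8 grid for an image split into 64 patches
print(heatmap.shape)
```

Reshaping the per-patch scores into the patch grid yields a heatmap that can be overlaid on the input image, which is the kind of visualization an interpretability tool like this exposes interactively.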