Core Concepts
Large Vision-Language Models (LVLMs) are vulnerable to adversarial attacks and exhibit inherent biases, which makes systematic robustness evaluation necessary.
Abstract
AVIBench introduces a framework for assessing the robustness of LVLMs against adversarial visual-instructions (AVIs), covering image-based, text-based, and content-bias AVIs. The study evaluates 14 open-source LVLMs and finds inherent biases even in advanced closed-source models such as GeminiProVision and GPT-4V. The results underscore the need to strengthen the security, fairness, and robustness of LVLMs.
Stats
AVIBench generates 260K AVIs encompassing five categories of multimodal capabilities and content bias.
MiniGPT-4 shows comparatively strong robustness to image corruptions among the evaluated LVLMs.
Elastic, Glass_Blur, and Shot_Noise are among the most effective image-corruption attacks, degrading LVLM performance more than the other corruption types (first sketch after this list).
Decision-based optimized image attacks achieve success rates that vary across the different LVLM capability categories (second sketch after this list).
TextFooler proves highly effective among text-based AVIs (third sketch after this list).
LLaVA and OpenFlamingo-V2 perform well in detecting unsafe information and cultural biases.
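A minimal sketch of applying these corruptions, assuming the third-party imagecorruptions package (the ImageNet-C implementations, where the Elastic corruption is named elastic_transform); it illustrates the corruption types themselves, not AVIBench's exact generation pipeline:

```python
# Apply the three most effective corruption types to a benchmark image.
# Assumes: pip install imagecorruptions pillow numpy
import numpy as np
from PIL import Image
from imagecorruptions import corrupt

def make_corrupted_images(image_path, severities=(1, 3, 5)):
    """Return {(corruption, severity): corrupted HxWx3 uint8 array}."""
    image = np.asarray(Image.open(image_path).convert("RGB"))
    results = {}
    for name in ("elastic_transform", "glass_blur", "shot_noise"):
        for severity in severities:  # 1 = mild ... 5 = severe
            results[(name, severity)] = corrupt(
                image, corruption_name=name, severity=severity
            )
    return results

# Usage: pair each corrupted image with the unchanged text instruction and
# compare the LVLM's answers against those for the clean image.
corrupted = make_corrupted_images("example.jpg")  # "example.jpg" is a placeholder
```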
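A hedged sketch of a decision-based (hard-label, black-box) image attack, using Foolbox's BoundaryAttack against a stand-in ResNet-18 classifier; attacking an LVLM, as AVIBench does, would additionally require mapping the model's free-form output to a success/failure decision:

```python
# Decision-based attack sketch: BoundaryAttack queries only the model's
# predicted labels and walks along the decision boundary to shrink the
# perturbation. Assumes: pip install foolbox torch torchvision
import foolbox as fb
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
preprocessing = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], axis=-3)
fmodel = fb.PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)

# A few ImageNet samples bundled with Foolbox serve as clean inputs.
images, labels = fb.utils.samples(fmodel, dataset="imagenet", batchsize=4)

attack = fb.attacks.BoundaryAttack(steps=5000)  # fewer steps than default, for speed
raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=None)
print(is_adv)  # which samples were successfully perturbed
```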
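A sketch of a TextFooler-style text AVI, using the TextAttack library's TextFoolerJin2019 recipe; the victim here is a stand-in HuggingFace sentiment classifier (textattack/bert-base-uncased-SST-2), whereas attacking a full LVLM would need a custom model wrapper:

```python
# TextFooler swaps words for synonyms until the victim model's output flips.
# Assumes: pip install textattack transformers
import transformers
from textattack.attack_recipes import TextFoolerJin2019
from textattack.models.wrappers import HuggingFaceModelWrapper

model_name = "textattack/bert-base-uncased-SST-2"  # example stand-in victim
model = transformers.AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)

attack = TextFoolerJin2019.build(HuggingFaceModelWrapper(model, tokenizer))

# Label 1 (positive) must match the model's current prediction; otherwise
# the attack is skipped and the original text is returned unchanged.
result = attack.attack("Please describe this beautiful and clear image.", 1)
print(result.perturbed_text())  # synonym-swapped instruction that flips the output
```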
Quotes
"Our findings shed light on the vulnerabilities of LVLMs."
"Inherent biases exist even in advanced closed-source LVLMs like GeminiProVision."
"Revealing model biases is a moral imperative that cannot be overlooked."