Comprehensive Evaluation of Jailbreak Attacks on Large Language Models and Multimodal Large Language Models
This study comprehensively evaluates the robustness of proprietary and open-source large language models (LLMs) and multimodal large language models (MLLMs) against a range of jailbreak attack methods targeting textual and visual inputs.