A Novel and Universal Fuzzing Framework for Proactively Discovering Jailbreak Vulnerabilities in Large Language Models
FuzzLLM, an automated fuzzing framework, can proactively test and discover jailbreak vulnerabilities in Large Language Models (LLMs) by generating diverse prompts that exploit structural and semantic weaknesses.