TroubleLLM proposes a novel approach to generating controllable test prompts for assessing the safety of Large Language Models (LLMs).