Red-teaming methodology uncovers safety vulnerabilities in text-to-image models through implicitly adversarial prompts.