Mitigating Unsafe Content Generation in Text-to-Image Models
SAFEGEN is a text-agnostic framework that mitigates the generation of sexually explicit content by text-to-image models, even under adversarial prompts, by removing unsafe visual representations from the model itself.