Discovering Interpretable Latent Directions in Diffusion Models for Responsible Text-to-Image Generation
The core message of this work is a self-discovery approach that finds interpretable latent directions in a diffusion model's internal representation; these directions can then be leveraged to enhance responsible text-to-image generation, including fair generation, safe generation, and responsible text-enhancing generation.
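To make the idea concrete, below is a minimal PyTorch sketch of one plausible reading of the self-discovery step: a learnable concept vector `c` is injected into a frozen diffusion U-Net's bottleneck activations and optimized with the standard denoising loss on images that exhibit the target concept. Everything here is an assumption for illustration, not the paper's released code: the names `unet`, `bottleneck`, `latents`, and `alphas_cumprod`, the injection site, and the hyperparameters are all hypothetical.

```python
import torch
import torch.nn.functional as F

def discover_concept_vector(unet, bottleneck, latents, text_emb,
                            alphas_cumprod, dim=1280, steps=500,
                            lr=1e-3, device="cuda"):
    """Learn one interpretable latent direction `c` for a target concept.

    Hypothetical interface (assumptions, not the paper's code):
      - unet(x_t, t, text_emb) is a frozen noise predictor;
      - `bottleneck` is the U-Net block whose activations we steer;
      - `latents` are encodings of images exhibiting the concept;
      - `alphas_cumprod` is the noise scheduler's cumulative-alpha table.
    """
    c = torch.zeros(dim, device=device, requires_grad=True)  # direction to learn
    opt = torch.optim.Adam([c], lr=lr)

    # Inject the learnable direction into the bottleneck via a forward hook;
    # returning a value from the hook replaces the module's output.
    def inject(module, args, output):
        return output + c.view(1, -1, 1, 1)

    handle = bottleneck.register_forward_hook(inject)
    try:
        for _ in range(steps):
            x0 = latents[torch.randint(len(latents), (1,), device=device)]
            t = torch.randint(0, len(alphas_cumprod), (1,), device=device)
            noise = torch.randn_like(x0)
            a = alphas_cumprod[t].view(-1, 1, 1, 1)
            x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise  # forward diffusion q(x_t | x_0)
            pred = unet(x_t, t, text_emb)                 # frozen model; grads flow to c only
            loss = F.mse_loss(pred, noise)                # denoising loss pulls c toward the concept
            opt.zero_grad(); loss.backward(); opt.step()
    finally:
        handle.remove()
    return c.detach()
```

Under this reading, the learned direction would be reused at sampling time through the same kind of hook: adding a concept vector (or a combination of antonym vectors, for fair generation) steers the output toward that concept, while subtracting an unsafe-concept vector suppresses it, which is one way the summary's "fair", "safe", and "responsible text-enhancing" uses could be realized.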