Latent Guard: A Safety Framework for Detecting Unsafe Concepts in Text-to-Image Generation
Latent Guard is a framework for efficiently detecting blacklisted concepts in the input prompts of text-to-image generation models, enabling robust safety measures without expensive retraining.
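To illustrate the general idea of prompt screening in an embedding space, the sketch below checks a prompt's text embedding against embeddings of blacklisted concepts. It is a minimal, hypothetical example, not the Latent Guard implementation: the CLIP checkpoint, the blacklist, the cosine-similarity rule, and the threshold value are all illustrative assumptions, whereas Latent Guard learns its own latent space and decision rule on top of the text encoder.

```python
import torch
from transformers import CLIPModel, CLIPTokenizer

# Illustrative blacklist and threshold; Latent Guard learns its own
# embedding space and decision rule, this only sketches the concept.
BLACKLIST = ["violence", "gore", "nudity"]
THRESHOLD = 0.25  # assumed value, not from the paper

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()


def embed(texts):
    """Return L2-normalized CLIP text embeddings for a list of strings."""
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)


def is_unsafe(prompt: str) -> bool:
    """Flag a prompt whose embedding is close to any blacklisted concept."""
    prompt_emb = embed([prompt])                      # (1, d)
    concept_embs = embed(BLACKLIST)                   # (len(BLACKLIST), d)
    sims = (prompt_emb @ concept_embs.T).squeeze(0)   # cosine similarities
    return bool((sims > THRESHOLD).any())


if __name__ == "__main__":
    for p in ["a sunny beach with palm trees", "a graphic scene of violence"]:
        print(p, "->", "blocked" if is_unsafe(p) else "allowed")
```

Because the check runs only on the text encoder's output, the blacklist can be edited at any time without touching the diffusion model itself, which is what makes retraining unnecessary.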