Conditional diffusion models struggle with out-of-distribution (OOD) features, leading to structural hallucinations in generated images. Our method alleviates this issue by performing separate diffusion processes for in-distribution (IND) and OOD regions, followed by a fusion module to produce coherent outputs.
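The region-wise fusion step can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's actual fusion module: it assumes the two diffusion branches have already produced full images (`x_ind`, `x_ood`) and a binary OOD mask, and blends them with a feathered mask so the seam is smooth.

```python
import numpy as np

def soften_mask(mask, iters=3):
    """Feather a binary OOD mask with repeated 3x3 box blurs (toy feathering)."""
    m = mask.astype(float)
    for _ in range(iters):
        padded = np.pad(m, 1, mode="edge")
        # average each pixel with its 8 neighbors
        m = sum(padded[i:i + m.shape[0], j:j + m.shape[1]]
                for i in range(3) for j in range(3)) / 9.0
    return m

def fuse(x_ind, x_ood, ood_mask):
    """Blend per-region diffusion outputs: OOD branch inside the mask,
    IND branch outside, with a soft transition at the boundary."""
    w = soften_mask(ood_mask)[..., None]  # broadcast weight over channels
    return w * x_ood + (1.0 - w) * x_ind
```

In practice a learned fusion network would replace the fixed mask blending, but the sketch shows the core idea: each branch only supplies pixels for the regions it models well.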
StegoGAN leverages steganography to prevent the hallucination of spurious features when translating between image domains with non-bijective class mappings.
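To make the steganography idea concrete, here is a generic least-significant-bit embedding sketch. This is an illustration of steganographic embedding in general, not StegoGAN's actual scheme: it hides a binary mask (e.g., marking unmatchable regions) in the LSB of a uint8 image, so the information survives in the output while being visually imperceptible.

```python
import numpy as np

def embed_mask_lsb(image, mask):
    """Hide a binary mask in the least significant bit of a uint8 image.

    Each pixel changes by at most 1 intensity level, so the stego image
    is visually indistinguishable from the original.
    """
    return (image & 0xFE) | mask.astype(np.uint8)

def extract_mask_lsb(stego):
    """Recover the hidden binary mask from the stego image's LSBs."""
    return (stego & 1).astype(bool)
```

Carrying such a hidden channel lets a model route information about non-bijective (unmatchable) content through the translated image instead of hallucinating plausible-looking substitutes for it.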