Stabilizing Generative Model Training by Correcting Synthetic Data
Introducing self-correction functions, which map synthesized data to be more likely under the true data distribution, can make self-consuming generative model training exponentially more stable.
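A minimal sketch of the idea, under strong simplifying assumptions: the "generative model" is a 1-D Gaussian fit by maximum likelihood, each generation retrains only on its own synthetic samples, and the correction function is idealized as replacing each synthetic sample, with some probability, by a sample that is likely under the true distribution. The function names (`fit`, `correct`, `self_consuming_loop`) and all parameter values are hypothetical, chosen only to illustrate the stabilizing effect, not taken from the work itself.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_MU, TRUE_SIGMA = 0.0, 1.0  # ground-truth data distribution

def fit(samples):
    """Toy 'generative model': a Gaussian fit by maximum likelihood."""
    return samples.mean(), samples.std()

def correct(samples, gamma):
    """Idealized self-correction: with probability gamma, map a synthetic
    sample to one that is likely under the true distribution.
    gamma=0.0 disables correction entirely."""
    mask = rng.random(samples.size) < gamma
    true_like = rng.normal(TRUE_MU, TRUE_SIGMA, samples.size)
    return np.where(mask, true_like, samples)

def self_consuming_loop(generations, n, gamma):
    """Repeatedly retrain the model on its own (corrected) synthetic data."""
    mu, sigma = TRUE_MU, TRUE_SIGMA
    for _ in range(generations):
        synthetic = rng.normal(mu, sigma, n)         # sample current model
        mu, sigma = fit(correct(synthetic, gamma))   # retrain on that data
    return mu, sigma

# Without correction, estimation noise compounds and the fitted variance
# collapses over many generations; with correction, the model stays near
# the true distribution.
mu_plain, sigma_plain = self_consuming_loop(generations=2000, n=200, gamma=0.0)
mu_corr, sigma_corr = self_consuming_loop(generations=2000, n=200, gamma=0.5)
```

In this toy setting the uncorrected loop is a multiplicative random walk on the fitted standard deviation with negative drift, while the correction step contracts each generation's error toward the true distribution, which is the intuition behind the stability claim.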