Fairness Feedback Loops: Training on Synthetic Data Amplifies Bias
The authors argue that model-induced distribution shifts (MIDS), in which models are successively retrained on synthetic data produced by earlier models, can degrade performance, fairness, and minoritized group representation, even when the initial dataset is unbiased. They propose algorithmic reparation (AR) as a framework for countering the injustices that MIDS introduce and for promoting equity.
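The amplification mechanism behind such a feedback loop can be illustrated with a toy two-group simulation. This is a minimal sketch, not the paper's model: the `sharpen` update, the group names, and all numbers are illustrative assumptions standing in for a model that over-commits to its majority prediction when generating synthetic labels.

```python
def sharpen(p: float) -> float:
    """One generation of retraining on model-generated labels, where the
    model over-commits to whichever label is currently more common
    (an illustrative stand-in for low-temperature synthetic sampling)."""
    return p**2 / (p**2 + (1 - p) ** 2)


def simulate_mids(p_minority: float = 0.48,
                  p_majority: float = 0.52,
                  generations: int = 10) -> list[tuple[float, float]]:
    """Track the positive-label rate for two groups across generations
    of training on the previous model's synthetic outputs."""
    history = [(p_minority, p_majority)]
    for _ in range(generations):
        p_minority, p_majority = sharpen(p_minority), sharpen(p_majority)
        history.append((p_minority, p_majority))
    return history


# A tiny initial skew is driven to the extremes: the minority group's
# positive-label rate collapses toward 0 while the majority's saturates
# toward 1, i.e. the disparity between groups grows every generation.
history = simulate_mids()
```

Under these assumptions, both groups start near 0.5, yet after ten generations the minority rate falls below 0.05 and the majority rate exceeds 0.95, showing how a small imbalance compounds into a large representational disparity once models train on their own outputs.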