Privacy Backdoors: Stealing Training Data from Corrupted Pretrained Models
An attacker can tamper with the weights of a pretrained machine learning model to plant "privacy backdoors" that let the attacker reconstruct individual training samples from the dataset later used to finetune the model.
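The core leakage channel such backdoors exploit is a standard gradient identity: for a linear unit $y = w^\top x + b$, the gradient with respect to $w$ is $\frac{\partial L}{\partial y}\,x$ and the gradient with respect to $b$ is $\frac{\partial L}{\partial y}$, so if a unit receives a nonzero gradient on exactly one sample during finetuning, the ratio of its weight update to its bias update reveals that sample exactly. The sketch below is a minimal toy illustration of this identity, not the paper's actual attack; the single-layer setup, hyperparameters, and variable names are all illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for one backdoored unit inside a larger pretrained model.
d = 8
layer = nn.Linear(d, 1)

# The victim finetunes on one private sample with a single SGD step.
x = torch.randn(1, d)            # private training sample
target = torch.tensor([[1.0]])

w_before = layer.weight.detach().clone()
b_before = layer.bias.detach().clone()

opt = torch.optim.SGD(layer.parameters(), lr=0.1)
loss = nn.functional.mse_loss(layer(x), target)
loss.backward()
opt.step()

# The attacker only sees the finetuned weights. Since the SGD updates are
# dw = -lr * (dL/dy) * x and db = -lr * (dL/dy), their ratio recovers x
# exactly, independent of the learning rate and the loss value.
dw = layer.weight.detach() - w_before
db = layer.bias.detach() - b_before
x_recovered = dw / db

print(torch.allclose(x_recovered, x, atol=1e-5))  # True
```

In the toy setting the unit happens to fire on the one sample; the paper's contribution is engineering pretrained weights so that, at scale and over many finetuning steps, individual units behave this way for individual samples.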