Privacy Backdoors: Amplifying Membership Inference Attacks through Poisoning Pre-trained Models
Adversaries can poison pre-trained models to significantly increase the success rate of membership inference attacks, even when victims fine-tune the models using their own private datasets.
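To make the threat concrete, here is a minimal sketch of the simplest membership inference attack, a loss-threshold test. This is an illustrative baseline only: the function name, losses, and threshold are hypothetical, and the paper's contribution is poisoning the pre-trained weights so that this member/non-member loss gap becomes much larger after the victim fine-tunes.

```python
def membership_inference(loss: float, threshold: float = 0.5) -> bool:
    """Predict 'member' if the model's loss on an example is below a threshold.

    Models typically fit their training (member) examples more tightly than
    unseen data, so low loss is evidence of membership. A privacy backdoor
    planted in the pre-trained model widens this gap, raising attack accuracy
    even though the victim fine-tunes on private data.
    """
    return loss < threshold

# Hypothetical per-example losses measured on a fine-tuned model:
losses = {"member_example": 0.05, "non_member_example": 2.3}

for name, loss in losses.items():
    print(f"{name}: predicted member = {membership_inference(loss)}")
```

In practice, attacks of this family calibrate the threshold per example (e.g., against shadow models); the poisoning makes even this crude global threshold far more reliable.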