Key Concepts
Data reconstruction attacks on federated learning can leak training samples that remain effective for training downstream models, despite imperfect reconstruction quality and label mismatches.
Statistics
Gradient inversion attacks can breach privacy on CIFAR-10 even at a batch size of 100.
Linear layer leakage attacks leak 78.93%, 76.61%, and 75.15% of images on CIFAR-10 for FC layer sizes of 4, 2, and 1 respectively.
Inverting Gradients on CIFAR-10 with batch size 4 takes 61.17 days to run on an NVIDIA A100 80GB GPU.
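To illustrate why fully connected (FC) layers are so leaky, the sketch below shows the core mechanism behind linear layer leakage in a minimal, hypothetical setting (a single FC layer and batch size 1, not the paper's exact attack): since dL/dW = (dL/dz)·xᵀ and dL/db = dL/dz, every row of the shared weight gradient is a scaled copy of the private input, which the server can rescale exactly using the bias gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)        # private client input (e.g. a flattened image)
W = rng.standard_normal((4, 8))   # FC layer weights
b = np.zeros(4)                   # FC layer bias

z = W @ x + b                     # forward pass through the FC layer
dz = z - rng.standard_normal(4)   # stand-in gradient of some loss w.r.t. z

# Gradients a federated learning client would share with the server:
gW = np.outer(dz, x)              # dL/dW = (dL/dz) x^T  -- each row is dz[i] * x
gb = dz                           # dL/db = dL/dz

# Server-side reconstruction: pick a row with a nonzero bias gradient
# and divide the weight-gradient row by it to cancel the scale factor.
i = int(np.argmax(np.abs(gb)))
x_rec = gW[i] / gb[i]

print(np.allclose(x_rec, x))      # prints True: exact recovery for batch size 1
```

With larger batches the rows become sums over samples, which is why real attacks need tricks (such as the sparsity-inducing FC layer sizes in the statistics above) to disentangle individual images.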
Quotes
"It is important to consider how far these leaked samples help in a downstream training task."
"Even poorly reconstructed images are useful for training."
"Leaked data from both gradient inversion and linear layer leakage attacks are able to train powerful models."