Core Concepts
FLiP enhances privacy in Federated Learning through local-global dataset distillation. Following the Principle of Least Privilege, each client shares only the information essential for training the global model, thereby mitigating privacy risks.
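To make the idea concrete, below is a minimal sketch of what a least-privilege client round might look like: the client distills its private data into a handful of synthetic samples per class and only those distilled tensors ever leave the device. The distillation objective shown here (matching per-class mean embeddings under a fixed random feature extractor) and names such as `distill_client_data` and the per-class budget are illustrative assumptions, not FLiP's actual procedure.

```python
# Sketch of client-side dataset distillation for a least-privilege FL round.
# Assumed details (not from the paper): feature-distribution matching with a
# fixed random embedding; `distill_client_data` and the budget are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distill_client_data(images, labels, n_classes, per_class=10, steps=200, lr=0.1):
    """Distill a client's private data into a few synthetic samples per class.

    Only the returned synthetic tensors would leave the client, so the shared
    information is limited to what the global model needs for training.
    """
    feat = nn.Sequential(nn.Flatten(), nn.Linear(images[0].numel(), 128), nn.ReLU())
    for p in feat.parameters():
        p.requires_grad_(False)  # fixed random embedding, used only for matching

    syn_x = torch.randn(n_classes * per_class, *images.shape[1:], requires_grad=True)
    syn_y = torch.arange(n_classes).repeat_interleave(per_class)
    opt = torch.optim.SGD([syn_x], lr=lr)

    for _ in range(steps):
        loss = 0.0
        for c in range(n_classes):
            real_c = feat(images[labels == c])
            syn_c = feat(syn_x[syn_y == c])
            # Match per-class mean embeddings of synthetic and real data.
            loss = loss + F.mse_loss(syn_c.mean(0), real_c.mean(0))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return syn_x.detach(), syn_y

if __name__ == "__main__":
    # Toy client data: 28x28 grayscale images, 100 samples for each of 3 classes.
    x = torch.randn(300, 1, 28, 28)
    y = torch.arange(3).repeat(100)
    syn_x, syn_y = distill_client_data(x, y, n_classes=3)
    print(syn_x.shape, syn_y.shape)  # distilled payload sent to the server
```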
Statistics
For every 5 additional distilled samples per category, the accuracy increases by an average of 0.12%, 2.172%, and 3.985% on MNIST, CIFAR-10, and CIFAR-100, respectively.
7 out of 12 task-irrelevant attribute inference attacks achieved an accuracy of 0.5 or less (no better than random guessing), indicating an effective defense against such attacks.
Membership inference attacks achieved only 49.75% accuracy against FLiP, close to random guessing, compared to 59.14% against vanilla Federated Learning, demonstrating strong resistance to such attacks.
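For intuition on why accuracy near 50% signals resistance, here is a small sketch of how membership inference attack accuracy is commonly measured with a simple loss-threshold attack; this is an illustrative evaluation under assumed loss distributions, not FLiP's actual attack setup.

```python
# Sketch: membership inference attack accuracy via a loss-threshold attack.
# An accuracy near 0.5 means the attacker cannot separate training members
# from non-members better than chance. Losses and threshold are hypothetical.
import numpy as np

def mia_accuracy(member_losses, nonmember_losses, threshold):
    """Predict 'member' when the model's loss on a sample is below the threshold."""
    correct_members = (member_losses < threshold).sum()
    correct_nonmembers = (nonmember_losses >= threshold).sum()
    total = len(member_losses) + len(nonmember_losses)
    return (correct_members + correct_nonmembers) / total

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Similar loss distributions for members and non-members -> accuracy near 0.5.
    members = rng.normal(0.9, 0.3, 1000)
    nonmembers = rng.normal(1.0, 0.3, 1000)
    print(f"attack accuracy: {mia_accuracy(members, nonmembers, threshold=0.95):.3f}")
```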