Adversarial Variational Graph Representation for Stealthy Model Poisoning Attacks on Federated Learning
The proposed VGAE-MP attack leverages an adversarial variational graph autoencoder (VGAE) to generate malicious local models solely from benign local models overheard during their transmission, without requiring any access to the training data. This enables the attack to effectively compromise the global model in federated learning while remaining stealthy and hard to detect.
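To make the idea concrete, the following is a minimal numpy sketch of the kind of pipeline the abstract describes: overheard benign local models are treated as nodes of a similarity graph, a one-layer GCN encoder produces a VGAE-style latent code, and a decoder crafts a malicious model constrained to stay within the spread of the benign models. All dimensions, weights, and the clipping-based stealth constraint are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 benign clients, each local model flattened to 8 params.
benign = rng.normal(0.0, 1.0, size=(5, 8))

# Build a similarity graph over the overheard benign models (cosine similarity).
unit = benign / np.linalg.norm(benign, axis=1, keepdims=True)
adj = (unit @ unit.T > 0.0).astype(float)        # edge where similarity is positive
adj_hat = adj + np.eye(len(adj))                 # add self-loops
deg = adj_hat.sum(axis=1)
norm_adj = adj_hat / np.sqrt(np.outer(deg, deg)) # symmetric normalization

# One-layer GCN encoder -> latent mean and log-variance (VGAE-style).
d_latent = 4
W_mu = rng.normal(0.0, 0.1, size=(8, d_latent))
W_logvar = rng.normal(0.0, 0.1, size=(8, d_latent))
mu = norm_adj @ benign @ W_mu
logvar = norm_adj @ benign @ W_logvar
z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)  # reparameterization

# Decoder maps the latent code back to model space; the crafted malicious model
# perturbs the benign mean, clipped to the per-parameter benign spread so it
# blends in with legitimate updates (an assumed stealthiness constraint).
W_dec = rng.normal(0.0, 0.1, size=(d_latent, 8))
perturbation = z.mean(axis=0) @ W_dec
benign_mean = benign.mean(axis=0)
benign_std = benign.std(axis=0)
malicious = benign_mean + np.clip(perturbation, -benign_std, benign_std)

print(malicious.shape)
# Stealth check: the malicious model lies within one std of the benign mean.
print(bool(np.all(np.abs(malicious - benign_mean) <= benign_std + 1e-9)))
```

In a real attack the encoder/decoder weights would be trained adversarially against the aggregation rule rather than drawn at random; the sketch only shows the data flow from overheard models to a statistically inconspicuous poisoned update.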