Effective Model Poisoning Attacks to Federated Learning via Consistent Malicious Updates
PoisonedFL is a model poisoning attack that crafts malicious model updates whose direction stays consistent across training rounds; because these updates accumulate rather than cancel out, they substantially degrade the final global model without requiring any knowledge of genuine clients' local training data or models.
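The core idea of directionally consistent malicious updates can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the fixed sign vector and the geometric magnitude schedule (`base_magnitude`, `growth`) are hypothetical parameterizations chosen only to show why a consistent direction compounds across rounds.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 10

# A random sign vector fixed once, before the attack begins.
# All malicious updates will point in this same direction.
sign = rng.choice([-1.0, 1.0], size=dim)

def malicious_update(round_idx, base_magnitude=1.0, growth=1.1):
    """Return a malicious update with a consistent direction across
    rounds and a round-dependent magnitude (illustrative schedule only)."""
    return sign * base_magnitude * (growth ** round_idx)

# Because every round's update shares the same sign pattern, their
# contributions to the aggregated global model add up round after
# round instead of canceling, which is what drives the degradation.
total_drift = sum(malicious_update(t) for t in range(20))
```

A benign-looking random update, by contrast, changes direction every round, so its per-round contributions largely cancel over time; the consistency is what makes the attack effective without observing other clients.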