A robust defense against data poisoning attacks in federated learning that preserves privacy, avoids overfitting, and requires no prior knowledge of which samples are poisoned.
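The source does not specify the defense mechanism itself, but defenses of this kind are commonly built on robust aggregation at the server, which needs neither access to raw client data (preserving privacy) nor labels marking poisoned samples. Purely as an illustration of that idea, and not as this work's method, here is a minimal sketch using a coordinate-wise median in place of plain averaging:

```python
import numpy as np

def median_aggregate(client_updates):
    """Aggregate client model updates with a coordinate-wise median.

    Unlike the mean, the median is robust: a minority of poisoned
    updates cannot pull the aggregate arbitrarily far. This is a
    generic illustration, not the specific defense described above.
    """
    stacked = np.stack(client_updates)  # shape: (n_clients, n_params)
    return np.median(stacked, axis=0)

# Nine honest clients report updates near 1.0; one poisoned client
# reports a huge value. The mean is dragged off; the median is not.
honest = [np.full(3, 1.0) + 0.01 * i for i in range(9)]
poisoned = [np.full(3, 100.0)]
updates = honest + poisoned

robust = median_aggregate(updates)          # stays near 1.0
naive = np.mean(np.stack(updates), axis=0)  # pulled toward 100
```

The server only ever sees model updates, never raw training data, which is why aggregation-side defenses are a natural fit for the privacy constraint mentioned above.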