Optimal Distributed Learning Under Data Poisoning and Byzantine Failures
The best learning guarantees achievable by a first-order distributed algorithm under the Byzantine failure threat model remain optimal even under the weaker data poisoning threat model. Furthermore, in distributed ML with heterogeneous datasets, fully-poisonous local data is a stronger adversarial setting than partially-poisonous local data.
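To make the setting concrete, below is a minimal sketch (not taken from the source) of one round of a Byzantine-robust first-order distributed step, using coordinate-wise trimmed-mean aggregation of worker gradients, a standard robust aggregator. The function name `trimmed_mean`, the parameter `f` (the assumed number of faulty workers), and the toy data are all illustrative assumptions.

```python
import numpy as np

def trimmed_mean(grads, f):
    """Coordinate-wise trimmed mean: in each coordinate, discard the f
    largest and f smallest values across workers, then average the rest."""
    g = np.sort(np.stack(grads), axis=0)  # sort each coordinate across workers
    return g[f:len(grads) - f].mean(axis=0)

# Toy round: 5 honest workers report gradients near 1.0 in each coordinate;
# 2 Byzantine workers send large outliers to derail the average.
rng = np.random.default_rng(0)
honest = [rng.normal(1.0, 0.1, size=3) for _ in range(5)]
byzantine = [np.full(3, 100.0), np.full(3, -100.0)]

agg = trimmed_mean(honest + byzantine, f=2)
print(agg)  # close to the honest mean despite the outliers
```

A plain average of the same seven gradients would be dominated by the two outliers; trimming bounds the adversary's influence, which is the kind of first-order robustness the guarantees above refer to.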