Attacking Provable Defenses Against Poisoning Attacks in High-Dimensional Machine Learning
The authors present HIDRA, a new attack that subverts the dimension-independent bias bounds claimed by provable defenses against poisoning attacks in high-dimensional machine learning. HIDRA exploits a fundamental computational bottleneck in practical implementations of these defenses, inducing a bias that scales with the number of dimensions.
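To build intuition for why bias can grow with dimension, here is a minimal, hypothetical sketch (not the HIDRA attack itself, and not any specific defense from the paper): a coordinate-wise median aggregator is attacked by malicious updates whose per-coordinate shift looks benign, yet the resulting L2 bias of the aggregate accumulates across coordinates, growing roughly like the square root of the dimension. All names and parameters below are illustrative assumptions.

```python
import numpy as np

def coordinate_median(updates):
    # Coordinate-wise median: a common robust aggregator. Per-coordinate it is
    # hard to move far, but small per-coordinate bias can add up across d axes.
    return np.median(updates, axis=0)

def poisoned_bias(d, n=100, eps=0.2, shift=2.0, seed=0):
    """Illustrative experiment: L2 bias of the aggregate under poisoning.

    d     -- model dimension
    n     -- total number of updates
    eps   -- fraction of malicious updates (assumed attacker budget)
    shift -- per-coordinate mean of malicious updates (stays near benign spread)
    """
    rng = np.random.default_rng(seed)
    honest = rng.normal(0.0, 1.0, size=(int(n * (1 - eps)), d))
    # Malicious updates shift every coordinate by a modest amount, so each
    # coordinate individually looks plausible to a per-coordinate defense.
    malicious = rng.normal(shift, 1.0, size=(int(n * eps), d))
    agg = coordinate_median(np.vstack([honest, malicious]))
    # The honest mean is 0, so the L2 norm of the aggregate is the induced bias.
    return float(np.linalg.norm(agg))

if __name__ == "__main__":
    for d in (10, 100, 1000):
        print(f"d={d:5d}  bias ~= {poisoned_bias(d):.2f}")
```

Because the attacker induces a roughly constant bias per coordinate, the L2 bias of the aggregate scales like sqrt(d), illustrating (in a simplified setting) how a defense that looks robust coordinate-by-coordinate can still suffer dimension-dependent bias.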