Core Concepts
In machine learning, adversarial robustness often comes at the cost of accuracy, particularly when no (nearly) optimal predictor is smooth or when measurement noise is significant.
Stats
Unless the signal-to-noise ratio $\mathrm{SNR}_p$ is low, adversarial robustness is impossible when $\epsilon \gg \min\!\left(\frac{C_p^{1/p}}{\lambda_*^{1/p}\,\mathrm{SNR}_p^{1/p}},\; \frac{C_p}{\sqrt{\lambda_*}\,\mathrm{SNR}_p^{1/(2p)}}\right)$.
For linear regression, adversarial robustness is impossible if $\epsilon \gg \sqrt{\lambda_*}\,/\,(\|\theta^\star\|_\Sigma / \sigma)$, i.e., unless the signal-to-noise ratio $\|\theta^\star\|_\Sigma^2 / \sigma^2$ is low.
If $p$, $C_p$, and $\mathrm{SNR}_p$ are constants independent of the dimension $d$, robustness against adversarial $\ell_\infty$ perturbations of size larger than $O(d^{-1/2})$ cannot be guaranteed.
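As a quick numerical illustration of these bounds, here is a minimal Python sketch. The function names and all constant values ($C_p$, $\lambda_*$, $\mathrm{SNR}_p$, the linear-regression inputs) are arbitrary assumptions chosen for demonstration, and the $\ell_\infty$-to-$\ell_2$ conversion uses the standard inequality $\|\delta\|_2 \le \sqrt{d}\,\|\delta\|_\infty$ rather than anything stated in the source.

```python
import numpy as np

def robustness_threshold(p, C_p, lam_star, snr_p):
    """Perturbation size beyond which, per the bound above, adversarial
    robustness and accuracy cannot be achieved simultaneously."""
    term1 = (C_p / lam_star) ** (1.0 / p) / snr_p ** (1.0 / p)
    term2 = C_p / (np.sqrt(lam_star) * snr_p ** (1.0 / (2.0 * p)))
    return min(term1, term2)

def linreg_threshold(lam_star, theta_norm_sigma, noise_sigma):
    """Linear-regression special case: sqrt(lam_star) / SNR."""
    return np.sqrt(lam_star) / (theta_norm_sigma / noise_sigma)

# A higher signal-to-noise ratio shrinks the threshold, i.e. robustness
# already becomes impossible at smaller perturbation sizes.
for snr in (0.5, 5.0, 50.0):
    eps = robustness_threshold(p=2, C_p=1.0, lam_star=1.0, snr_p=snr)
    print(f"SNR_p = {snr:5.1f}  ->  threshold ~ {eps:.4f}")

# Linear-regression case (hypothetical values): threshold = sqrt(lam_star)/SNR.
print(f"linreg: {linreg_threshold(1.0, theta_norm_sigma=2.0, noise_sigma=0.5):.4f}")

# l_inf scaling: an l_inf ball of radius eps contains l_2 perturbations of
# norm up to eps * sqrt(d), so a dimension-independent threshold t turns
# into an l_inf threshold of order t / sqrt(d).
t = robustness_threshold(p=2, C_p=1.0, lam_star=1.0, snr_p=5.0)
for d in (10, 100, 1000):
    print(f"d = {d:5d}  ->  l_inf threshold ~ {t / np.sqrt(d):.5f}")  # O(d**-0.5)
```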
Quotes
"If no (nearly) optimal predictor is smooth, adversarial robustness comes at the cost of accuracy."
"The derived trade-off can be interpreted as the necessity of 𝜖 to be sufficiently small such that Lǫ(f) ≲ R⋆, to make R(f) + Rǫ(f) ≲ R⋆ possible."