Quantization increases the average distance of data points to the decision boundary, making it harder for attacks to optimize over the loss surface. Depending on the perturbation magnitude, quantization can act as either a noise attenuator or a noise amplifier, and it also introduces gradient misalignment, where gradients computed on a full-precision surrogate no longer match the quantized model's loss surface. Training-based defenses improve adversarial robustness by pushing points even further from the decision boundary, but they still need to address quantization shift and gradient misalignment.
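To make the attenuator/amplifier behavior concrete, the following minimal sketch (not from the study; the step size, inputs, and perturbation values are hypothetical) applies a round-to-nearest uniform quantizer to a perturbed input. A perturbation smaller than half the quantization step is absorbed entirely, while one that crosses a bin boundary is rounded up to a full step and thus amplified, which is closely related to the quantization-shift phenomenon mentioned above.

```python
import numpy as np

def uniform_quantize(x, step):
    """Round-to-nearest uniform quantizer with step size `step`."""
    return step * np.round(x / step)

step = 0.1                        # hypothetical quantization step size
x = np.array([0.52, 0.52])        # clean input values
delta = np.array([0.02, 0.04])    # small vs. slightly larger perturbation

qx = uniform_quantize(x, step)
qx_adv = uniform_quantize(x + delta, step)

# Effective perturbation after quantization:
# - delta=0.02 keeps 0.54 in the same bin as 0.52 -> attenuated to 0.0
# - delta=0.04 pushes 0.56 into the next bin      -> amplified to 0.1 (> 0.04)
print(qx_adv - qx)  # [0.  0.1]
```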
Quantization can significantly improve the efficiency of deep neural networks, but it can also affect their adversarial robustness. This study investigates how the components of the quantization pipeline, including initialization parameters, training strategies, and bit-widths, influence the adversarial robustness of quantized models.