The paper presents a comprehensive empirical evaluation of the adversarial robustness of quantized neural networks (QNNs) targeting TinyML applications. The key findings are:
Quantization increases the average point distance to the decision boundary, making it harder for attacks to optimize over the loss surface. This can cause the estimated gradient to explode or vanish, a phenomenon known as gradient masking.
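To make the gradient-masking effect concrete, the following sketch (illustrative only, not the paper's artifact) probes a toy uniformly quantized scalar model with a two-sided finite-difference gradient estimate, the kind of estimate a black-box attack relies on. The bit-width, input range, toy loss, and probe points are all assumptions chosen for the demonstration.

```python
import numpy as np

def quantize(x, bits=8, x_min=-1.0, x_max=1.0):
    """Uniform quantization/dequantization at the given bit-width."""
    scale = (x_max - x_min) / (2**bits - 1)
    return np.round((x - x_min) / scale) * scale + x_min

def loss(x, w=2.0):
    """Toy scalar 'network': squared response to the quantized input."""
    return (w * quantize(x)) ** 2

eps = 1e-5  # probe size for the two-sided finite-difference estimate
for label, x0 in [("inside a quantization bin", 0.1000),
                  ("straddling a bin edge    ", 0.101958)]:
    g_est = (loss(x0 + eps) - loss(x0 - eps)) / (2 * eps)
    g_smooth = 8 * x0  # analytic gradient of (2x)^2 without quantization
    print(f"{label}: estimated gradient {g_est:.3f} (smooth model: {g_smooth:.3f})")
# The first estimate is exactly 0 (vanishing: both probes fall in the same bin);
# the second is roughly 320 (exploding: the probes straddle a bin edge), while the
# smooth-model gradient is about 0.8 in both cases.
```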
Adversarial examples crafted on full-precision ANNs do not transfer well to QNNs because of gradient misalignment and the quantization-shift effect. Quantization can mitigate small perturbations, but it can also amplify larger ones.
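The quantization-shift intuition can be seen on a single input feature: a perturbation crafted against the float-32 model is re-quantized at the QNN input, where it is either rounded away entirely or rounded up to a whole quantization step. The sketch below is illustrative only; the feature value, bit-width, and perturbation sizes are assumptions.

```python
import numpy as np

def quantize(x, bits=8, x_min=0.0, x_max=1.0):
    """Uniform quantization/dequantization over the input range."""
    scale = (x_max - x_min) / (2**bits - 1)
    return np.round((x - x_min) / scale) * scale + x_min

x = 0.4                 # one normalized input feature (hypothetical pixel value)
step = 1.0 / 255        # int-8 quantization step for a [0, 1] input
for delta in (0.3 * step, 0.6 * step):
    effective = quantize(x + delta) - quantize(x)  # perturbation the QNN actually sees
    print(f"crafted delta = {delta:.5f} -> effective delta = {effective:.5f}")
# 0.3 of a step is rounded back into the original bin (effective 0: mitigated);
# 0.6 of a step is rounded up to a whole step (effective > crafted: amplified).
```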
While input pre-processing defenses show impressive denoising results for small perturbations, their effectiveness diminishes as the perturbation budget grows. Train-based defenses generally increase the average point distance to the decision boundary, even after quantization, but they still need to address quantization-shift and gradient misalignment.
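As one concrete instance of an input pre-processing defense, the sketch below implements feature squeezing (bit-depth reduction plus median smoothing). It is not implied to be one of the defenses evaluated in the paper, and random noise stands in for an adversarial perturbation; it only illustrates why denoising works well for small perturbations and degrades for larger ones.

```python
import numpy as np
from scipy.ndimage import median_filter

def feature_squeeze(x, bits=5, window=2):
    """Reduce bit depth and apply local median smoothing before inference."""
    levels = 2**bits - 1
    x = np.round(x * levels) / levels        # bit-depth reduction
    return median_filter(x, size=window)     # spatial smoothing

rng = np.random.default_rng(0)
clean = rng.random((8, 8)).astype(np.float32)      # stand-in image in [0, 1]
for sigma in (0.01, 0.10):                         # small vs. large perturbation
    adv = np.clip(clean + rng.normal(0.0, sigma, clean.shape), 0.0, 1.0)
    residual = np.abs(feature_squeeze(adv) - feature_squeeze(clean)).mean()
    print(f"perturbation sigma = {sigma:.2f}: residual after squeezing = {residual:.4f}")
# Much of a small perturbation is absorbed by the coarser bit depth and smoothing;
# a larger perturbation survives the squeeze, which is why such defenses lose
# effectiveness as the perturbation budget grows.
```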
The bit-width of quantization affects adversarial robustness: int-8 models are generally more robust than int-16 and float-32 models. Int-16 models are more affected by adversarial examples transferred from float-32 ANNs because quantization-shift and gradient misalignment act more strongly on them.
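Extending the single-feature example above to a population of features, the sketch below compares how much of a perturbation crafted at float-32 precision survives re-quantization at int-8 versus int-16. The uniform quantizer, input range, and random sub-step perturbations (standing in for adversarially crafted ones) are assumptions for illustration.

```python
import numpy as np

def quantize(x, bits, x_min=0.0, x_max=1.0):
    """Uniform quantization/dequantization at the given bit-width."""
    scale = (x_max - x_min) / (2**bits - 1)
    return np.round((x - x_min) / scale) * scale + x_min

rng = np.random.default_rng(0)
x = rng.random(10_000)                    # stand-in input features in [0, 1]
delta = rng.uniform(-1, 1, x.size) / 510  # perturbation up to half an int-8 step
for bits in (8, 16):
    eff = quantize(x + delta, bits) - quantize(x, bits)  # perturbation the QNN sees
    erased = np.mean(eff == 0.0)                         # features where it is absorbed outright
    shift = np.abs(eff - delta).mean()                   # average quantization shift
    print(f"int-{bits:2d}: perturbation erased on {erased:.0%} of features, "
          f"mean shift from the crafted delta = {shift:.6f}")
# At int-8, most sub-step perturbations are absorbed and the rest are shifted to a
# whole quantization step; at int-16 the grid is roughly 256x finer, so the crafted
# perturbation reaches the model nearly unchanged.
```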
The authors provide a comprehensive evaluation suite including three QNNs, ten attacks, and six defenses, and make all artifacts open-source to enable independent validation and further exploration.
Key insights from the original source by Miguel Costa..., arxiv.org, 04-09-2024: https://arxiv.org/pdf/2404.05688.pdf