Core Concepts
Safe-Error Attacks (SEA) using fault injection can effectively extract embedded neural network models on 32-bit microcontrollers, even with limited training data, by exploiting the relationship between injected faults and the resulting variations (or lack thereof) in the model's predictions.
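The core reasoning can be sketched in Python. The idea behind a safe-error attack under a bit-set fault model: force a parameter bit to 1; if the output is unchanged, the bit was likely already 1 (a "safe error"), otherwise it was 0. The toy single-neuron `predict`, the names `bit_set` and `sea_recover_bit`, and the single-weight example below are illustrative assumptions for this sketch, not the paper's implementation. Note that an unchanged output can also mean the flipped bit simply did not affect the decision for that input, which is why the attack relies on carefully crafted inputs.

```python
import numpy as np

def predict(weights_bits, x):
    # Toy model: interpret each stored 32-bit word as a signed weight
    # and output a binary decision on the sign of the dot product.
    w = weights_bits.view(np.int32).astype(np.float64)
    return int(np.dot(w, x) > 0)

def bit_set(word, bit):
    # Bit-set fault model: force one bit of a 32-bit word to 1.
    return np.uint32(word | np.uint32(1 << bit))

def sea_recover_bit(weights_bits, idx, bit, x):
    # Safe-error test: inject a bit-set fault into parameter `idx`,
    # compare the faulted prediction against the fault-free baseline.
    baseline = predict(weights_bits, x)
    faulted = weights_bits.copy()
    faulted[idx] = bit_set(faulted[idx], bit)
    # Unchanged output -> the fault was "safe" -> guess the bit is 1.
    return 1 if predict(faulted, x) == baseline else 0
```

For example, with a single weight stored as `5` (sign bit 0) and an input that makes the decision sign-sensitive, setting bit 31 flips the prediction and the attacker correctly concludes the bit was 0.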
Key Statistics
For CNN, 14.23% of "Certain" inputs did not lead to any bit recovery.
For MLP, "Uncertain" inputs extracted 64 times more bits on average than "Certain" inputs (8438 vs. 131).
The attack recovered 80% and 90% of the most significant bits (MSB) of the CNN parameters with only 150 and 1500 crafted inputs, respectively.
The LSBL principle increased the rate of recovered bits from 47.05% to 80.1% for the CNN model with 5000 crafted inputs.
The recovery error for bits estimated by LSBL decreased to under 1% with only 150 and 300 inputs for CNN and MLP, respectively.
Without any training, using only the recovered bits, the substitute model achieved 26.02% accuracy for the CNN and 75.78% for the MLP.
With 90% of MSB recovered, the substitute models achieved 75.27% accuracy for CNN and 92.93% for MLP, with fidelity rates of 85.58% and 96.44%, respectively.
The Accuracy Under Attack (AUA) for the victim models, when using adversarial examples crafted on the substitute models, was 1.83% for CNN and 0% for MLP.
In practical experiments on an ARM Cortex-M3 platform, 90% of MSB were recovered using only 15 crafted inputs.
Quotes
"Our work is the first to demonstrate that a well-known attack strategy against cryptographic modules is possible and can reach consistent results regarding the state-of-the-art."
"This work aims at demonstrating that this two-step methodology is actually generalizable to another type of platforms, i.e. 32-bit microcontrollers, with a different fault model (bit-set) and exploitation methods (SEA and input crafting)."
"Our results demonstrate a high rate of recovered bits for both models thanks to SEA associated to the LSBL principle. In the best case, we can estimate about 90% of the most significant bits."
"This research highlights the vulnerability of embedded machine learning models to physical attacks, particularly in the context of increasing deployment of these models in resource-constrained devices."