Core Concepts
Efficiently leveraging the Lipschitz constant and variance to enhance certified robustness in neural networks.
Abstract
Real-world applications of deep neural networks are hindered by their unstable predictions under noisy inputs and adversarial attacks. The certified radius, the largest input perturbation against which a prediction is provably unchanged, is a key measure of model robustness. Randomized smoothing injects noise at inference time to construct a smoothed, provably robust classifier. The interplay between the Lipschitz constant, the prediction margin, and the variance of the smoothed output significantly affects the certified radius. By optimizing simplex maps and Lipschitz bounds, the Lipschitz-Variance-Margin Randomized Smoothing (LVM-RS) procedure achieves state-of-the-art certified accuracy compared to existing methods.
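For concreteness, below is a minimal sketch of the standard randomized-smoothing certificate (in the style of Cohen et al., 2019) that LVM-RS builds on; the names `base_classifier`, `sigma`, and `n_samples` are illustrative assumptions rather than the paper's exact setup, and a real implementation would replace the plug-in probability estimate with a proper lower confidence bound.

```python
# Minimal sketch of a randomized-smoothing certificate in the style of
# Cohen et al. (2019), which LVM-RS refines; `base_classifier`, `sigma`,
# and `n_samples` are illustrative choices, not the paper's exact setup.
import numpy as np
from scipy.stats import norm

def certify(base_classifier, x, sigma=0.25, n_samples=1000):
    """Monte Carlo estimate of the smoothed prediction and its l2 radius."""
    # Count base-classifier votes under Gaussian input noise.
    counts = {}
    for _ in range(n_samples):
        noisy = x + sigma * np.random.randn(*x.shape)
        c = base_classifier(noisy)  # returns a class index
        counts[c] = counts.get(c, 0) + 1
    top_class = max(counts, key=counts.get)
    # Plug-in estimate of the top-class probability; a real certificate
    # would use a (1 - alpha) lower confidence bound (e.g. Clopper-Pearson).
    p_a = counts[top_class] / n_samples
    if p_a <= 0.5:
        return None, 0.0  # abstain: no certificate possible
    # Certified l2 radius: R = sigma * Phi^{-1}(p_A).
    return top_class, sigma * norm.ppf(p_a)
```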
Statistics
Experimental results show a significant improvement in certified accuracy compared to current state-of-the-art methods.
Certified accuracy on CIFAR-10: 52.56% at ε=0.0, 46.17% at ε=0.25, 39.09% at ε=0.5.
Certified accuracy on ImageNet: 80.66% at ε=0.0, 69.84% at ε=0.5, 53.85% at ε=1.0.
Quotes
"We introduce a different way to convert logits to probability vectors for the base classifier to leverage the variance-margin trade-off."
"Our novel certification procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certification radius."
"Our research encompasses contributions using Gaussian-Poincaré’s inequality and Empirical Bernstein inequality to control risk α."