Core Concepts
Randomized smoothing that exploits the trade-off between the Lipschitz constant and the decision margin enhances certified robustness.
Abstract
Real-world applications of deep neural networks must contend with noisy inputs and adversarial attacks.
The certified radius is a key measure of model robustness, and randomized smoothing offers a noise-injection framework for establishing it.
The variance introduced by Monte-Carlo sampling interacts closely with two properties of the classifier: its Lipschitz constant and its margin.
The proposed approach leverages this variance-margin trade-off to increase the certified robust radius.
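To make the quantities concrete, below is the standard ℓ2 certified radius of a Gaussian-smoothed classifier from the randomized smoothing literature; the notation (σ for the noise level, Φ⁻¹ for the Gaussian quantile function, p_A and p_B for confidence bounds on the top-two class probabilities) is assumed here, not taken from this summary.

```latex
% Standard certified l2 radius of a Gaussian-smoothed classifier:
% the prediction is provably constant for all perturbations \delta
% with ||\delta||_2 < R. Tighter bounds on p_A, p_B (i.e. lower
% Monte-Carlo variance) and a larger top-two gap (margin) both
% enlarge R -- the trade-off this work exploits.
R = \frac{\sigma}{2}\left( \Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B}) \right)
```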
Experimental results show a significant improvement in certified accuracy over current methods.
The certification procedure applies to pre-trained models combined with randomized smoothing, yielding a zero-shot improvement in certified robustness.
Deep neural networks are vulnerable to adversarial attacks, and Lipschitz continuity is crucial for building robust classifiers.
Randomized smoothing convolves the base classifier with a Gaussian distribution to increase robustness.
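As an illustration, here is a minimal numpy sketch of the Monte-Carlo estimate behind Gaussian smoothing; smoothed_predict and toy_classifier are hypothetical names for this sketch, not the paper's implementation.

```python
import numpy as np

def smoothed_predict(f, x, sigma, n_samples, rng):
    """Monte-Carlo estimate of the Gaussian-smoothed classifier g(x).

    f         -- base classifier mapping a batch of inputs to class labels
    x         -- a single input (1-D array)
    sigma     -- standard deviation of the Gaussian noise
    n_samples -- number of Monte-Carlo samples (more samples -> less variance)
    """
    # Classify n_samples noisy copies of x.
    noise = rng.normal(0.0, sigma, size=(n_samples, x.shape[0]))
    labels = f(x[None, :] + noise)
    # The smoothed prediction is the majority class among the noisy samples.
    counts = np.bincount(labels)
    return counts.argmax(), counts

# Toy stand-in for a trained network: class 1 iff the mean coordinate is positive.
def toy_classifier(batch):
    return (batch.mean(axis=1) > 0).astype(int)

rng = np.random.default_rng(0)
x = np.full(10, 0.3)
pred, counts = smoothed_predict(toy_classifier, x, sigma=0.5, n_samples=1000, rng=rng)
print(pred, counts)  # majority class and per-class sample counts
```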
Margins play a critical role in classifier robustness; larger margins are associated with better generalization.
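The link between the Lipschitz constant and the margin can be stated as a worked bound; the form below is one common version, under the assumption (made here, not stated in this summary) that each logit is L-Lipschitz with respect to the ℓ2 norm.

```latex
% If each logit f_c is L-Lipschitz, a perturbation \delta moves every
% logit by at most L * ||\delta||_2, so the top-two gap shrinks by at
% most 2L * ||\delta||_2. The prediction therefore cannot change while
% ||\delta||_2 < r(x), where
r(x) = \frac{f_{c_1}(x) - f_{c_2}(x)}{2L}
% with c_1, c_2 the top-two predicted classes: a larger margin or a
% smaller Lipschitz constant directly enlarges the certified radius.
```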
The proposed Lipschitz-Variance-Margin Randomized Smoothing (LVM-RS) procedure balances the Monte-Carlo variance against the decision margin.
Key Statistics
Monte-Carlo sampling introduces variance that affects the reliability of the certification.
Bernstein’s concentration inequality is used to control the risk level α.
The empirical Bernstein inequality integrates the empirical variance to manage the risk α with a tighter bound.
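As a sketch of how such a bound is applied in practice, the snippet below computes a lower confidence bound on a class probability from Monte-Carlo samples using the empirical Bernstein inequality of Maurer and Pontil (2009); the function name and the Bernoulli example are illustrative assumptions, not the paper's code.

```python
import numpy as np

def empirical_bernstein_lower_bound(samples, alpha):
    """One-sided lower confidence bound on the mean of [0, 1]-valued samples.

    Empirical Bernstein inequality (Maurer & Pontil, 2009): with
    probability at least 1 - alpha,
        E[X] >= mean - sqrt(2 * var * log(2/alpha) / n)
                     - 7 * log(2/alpha) / (3 * (n - 1)).
    The bound tightens as the empirical variance shrinks, which is the
    property the certification procedure exploits.
    """
    n = len(samples)
    mean = np.mean(samples)
    var = np.var(samples, ddof=1)  # unbiased empirical variance
    log_term = np.log(2.0 / alpha)
    slack = np.sqrt(2.0 * var * log_term / n) + 7.0 * log_term / (3.0 * (n - 1))
    return max(mean - slack, 0.0)

# Example: lower-bound the top-class probability from 10,000 MC indicators.
rng = np.random.default_rng(0)
samples = (rng.random(10_000) < 0.9).astype(float)  # Bernoulli(0.9) draws
print(empirical_bernstein_lower_bound(samples, alpha=1e-3))
```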