Core Concepts
The authors introduce two algorithms for computing tight guarantees on the probabilistic robustness of Bayesian Neural Networks (BNNs), demonstrating that their approach yields tighter bounds than state-of-the-art sampling-based methods.
Abstract
The content discusses why verifying the robustness of Bayesian Neural Networks (BNNs) is harder than verifying standard Neural Networks. It introduces two new algorithms, Pure Iterative Expansion (PIE) and Gradient-guided Iterative Expansion (GIE), that compute sound lower bounds on probabilistic robustness. Evaluated against the state of the art on benchmarks such as MNIST and CIFAR10, the algorithms compute bounds up to 40% tighter. The paper also includes a theoretical comparison, an ablation study on gradient-based scaling factors, and an analysis of how many iterations are needed for meaningful lower-bound approximations.
Stats
Bounds up to 40% tighter than the state of the art (SoA).
Benchmarks: MNIST and CIFAR10.
Sound lower bounds computed with the PIE and GIE algorithms.
Quotes
"Our algorithms efficiently search the parameters’ space for safe weights by using iterative expansion and the network’s gradient."
"We introduce two new algorithms that produce sound lower bounds on the probabilistic robustness of BNNs."
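The iterative-expansion idea in the quotes above can be illustrated with a minimal sketch. This is not the paper's algorithm: it is a hypothetical one-weight toy in which a symmetric interval around the posterior mean is grown geometrically while a safety check certifies it, and the posterior mass of the last safe interval serves as a lower bound on probabilistic robustness. The names (`iterative_expansion`, `is_safe`), the toy linear "network", and all constants are assumptions for illustration.

```python
import math

def gaussian_mass(lo, hi, mu, sigma):
    # P(lo <= w <= hi) for a weight w ~ N(mu, sigma^2), via the error function.
    def cdf(x):
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    return cdf(hi) - cdf(lo)

def iterative_expansion(mu, sigma, is_safe, r0=0.01, gamma=2.0, max_iter=30):
    """Toy PIE-style sketch (not the paper's implementation): grow a
    symmetric interval [mu - r, mu + r] around the posterior mean while
    it remains certifiably safe, then return the posterior mass of the
    largest safe interval as a sound lower bound."""
    r, best = r0, 0.0
    for _ in range(max_iter):
        lo, hi = mu - r, mu + r
        if not is_safe(lo, hi):
            break  # expansion hit an unsafe region; keep the last safe mass
        best = gaussian_mass(lo, hi, mu, sigma)
        r *= gamma
    return best

# Hypothetical 1-weight "network" f(w) = w * x with input x = 1.0;
# the output is deemed safe while it stays inside [0.5, 1.5]. Because
# the map is monotone in w, checking the endpoints certifies the interval.
safe = lambda lo, hi: 0.5 <= lo * 1.0 and hi * 1.0 <= 1.5
bound = iterative_expansion(mu=1.0, sigma=0.2, is_safe=safe)
print(round(bound, 2))
```

A gradient-guided variant (GIE-style) would, per the first quote, scale the per-dimension expansion step using the network's gradient instead of growing all dimensions uniformly; in this 1-D toy the two coincide.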