The paper introduces the concept of Relative Safety Margins (RSMs) to compare the robustness of decisions made by two neural network classifiers (referred to as "twins") that share the same input and output domains. The RSM of one classifier with respect to another quantifies the margin with which the first makes its decisions relative to the margin of the second.
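The paper gives the formal definition of RSMs; purely as an illustration, the sketch below (Python with NumPy) takes a network's decision margin to be the logit of its chosen class minus the best competing logit, and treats an RSM as the difference between the two twins' margins on the same input. The function names and logit values are invented for this example and are not taken from the paper.

```python
import numpy as np

def decision_margin(logits: np.ndarray, cls: int) -> float:
    """Margin with which class `cls` is chosen: its logit minus the best competing logit."""
    competitors = np.delete(logits, cls)
    return float(logits[cls] - np.max(competitors))

def relative_safety_margin(logits_a: np.ndarray, logits_b: np.ndarray, cls: int) -> float:
    """Illustrative RSM of network A with respect to network B for a reference class:
    positive when A decides for `cls` with more margin than B, negative when less."""
    return decision_margin(logits_a, cls) - decision_margin(logits_b, cls)

# Hypothetical logits for a 4-class problem; class 2 is the reference decision.
logits_original = np.array([0.1, 0.3, 2.0, 0.5])  # margin for class 2: 2.0 - 0.5 = 1.5
logits_compact = np.array([0.2, 0.4, 1.1, 0.9])   # margin for class 2: 1.1 - 0.9 = 0.2
print(relative_safety_margin(logits_compact, logits_original, cls=2))  # ≈ -1.3 (less margin)
```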
The authors propose a framework to establish safe bounds on RSMs and on their generalization, Local Relative Safety Margins (LRSMs), which account for perturbed inputs within a given neighborhood of a nominal input. This allows them to formally verify whether one network makes the same decisions as another and to quantify the margins with which those decisions are made.
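The bounds in the paper are established by formal verification over the entire neighborhood; the sketch below is only a crude empirical stand-in that samples random ℓ∞ perturbations to estimate a worst-case local margin. The names (`sampled_local_margin`, `sampled_lrsm`, `model`) and the assumption that a model is a callable mapping a NumPy array to a logits vector are made up for the example and are not the paper's method or API.

```python
import numpy as np

def decision_margin(logits: np.ndarray, cls: int) -> float:
    """Logit of `cls` minus the best competing logit (same helper as above)."""
    return float(logits[cls] - np.max(np.delete(logits, cls)))

def sampled_local_margin(model, x, cls, eps=0.01, n_samples=200, seed=0):
    """Crude estimate of the worst decision margin for `cls` over an l-infinity ball
    of radius `eps` around `x`, obtained by random sampling (not formal verification)."""
    rng = np.random.default_rng(seed)
    worst = np.inf
    for _ in range(n_samples):
        delta = rng.uniform(-eps, eps, size=x.shape)
        worst = min(worst, decision_margin(model(x + delta), cls))
    return worst

def sampled_lrsm(model_a, model_b, x, cls, eps=0.01):
    """Sampled stand-in for a Local Relative Safety Margin of A w.r.t. B: the difference
    of their estimated worst-case margins over the same neighborhood of `x`."""
    return (sampled_local_margin(model_a, x, cls, eps)
            - sampled_local_margin(model_b, x, cls, eps))
```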
The authors evaluate their approach on the MNIST, CIFAR10, CHB-MIT Scalp EEG, and MIT-BIH Arrhythmia datasets. They investigate the effects of pruning, quantization, and knowledge distillation on LRSMs, and show that certain schemes can consistently degrade the quality of decisions made by the compact networks compared to the original networks.
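As a hypothetical usage of the sampling sketch above (the names `pruned_net`, `original_net`, `test_inputs`, and `test_labels` are placeholders, not the paper's experimental code), one could count how often a compact network decides with a smaller estimated local margin than its original twin:

```python
# Hypothetical comparison loop; a negative sampled LRSM means the pruned network
# decides with less estimated local margin than the original on that input.
degraded = sum(
    sampled_lrsm(pruned_net, original_net, x, cls=y, eps=0.01) < 0
    for x, y in zip(test_inputs, test_labels)
)
print(f"{degraded}/{len(test_labels)} inputs with a smaller local margin after pruning")
```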
Key insights from the paper by Anahita Bani... at arxiv.org, 09-26-2024: https://arxiv.org/pdf/2409.16726.pdf