The AdaSAP method introduces a three-step algorithm that minimizes the sharpness of the loss landscape to produce robust sparse networks. By strategically applying weight perturbations, the method prepares the network for pruning while improving robustness to input variations unseen during training. AdaSAP significantly improves the relative robust accuracy of pruned models on image classification and object detection tasks across a range of compression ratios, outperforming recent pruning methods by large margins. The method addresses the joint challenge of network robustness and compression, emphasizing the importance of handling unseen input variations in safety-critical applications such as autonomous driving. Through this flatness-based optimization procedure, AdaSAP aims to balance sparsity and robustness in neural networks.
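The idea of combining a sharpness-aware (flatness-seeking) update with subsequent pruning can be illustrated with a minimal toy sketch. This is not the paper's implementation: the quadratic loss, the plain SAM-style perturbation, the magnitude-pruning criterion, and all function names here are illustrative assumptions.

```python
import numpy as np

def loss_grad(w, X, y):
    """Gradient of mean squared error for a toy linear model (assumption)."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def sharpness_aware_step(w, X, y, rho=0.05, lr=0.1):
    """One flatness-seeking update: perturb the weights in the ascent
    direction (scaled to radius rho), then descend using the gradient
    evaluated at the perturbed point."""
    g = loss_grad(w, X, y)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # worst-case perturbation
    g_perturbed = loss_grad(w + eps, X, y)        # gradient at perturbed weights
    return w - lr * g_perturbed

def magnitude_prune(w, sparsity=0.5):
    """Zero out the smallest-magnitude weights (a simple stand-in for
    the paper's pruning step)."""
    k = int(len(w) * sparsity)
    mask = np.ones_like(w)
    mask[np.argsort(np.abs(w))[:k]] = 0.0
    return w * mask

# Toy data: a linear regression problem with 8 weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 8))
y = X @ rng.normal(size=8)

w = rng.normal(size=8)
for _ in range(300):
    w = sharpness_aware_step(w, X, y)

w_pruned = magnitude_prune(w, sparsity=0.5)
print(np.count_nonzero(w_pruned))  # → 4 (half of the 8 weights survive)
```

Training toward a flatter minimum before pruning is the intuition behind the approach: in a flat region, zeroing some weights moves the loss less than it would at a sharp minimum.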