Core Concepts
This survey provides a comprehensive overview of the research on quantitative assessment of neural network robustness in image recognition, covering concepts, metrics, and assessment methods.
Abstract
This survey presents a detailed examination of the robustness assessment of neural networks in image recognition tasks. It covers the following key aspects:
Robustness Concepts:
Analyzes the definition of robustness for AI systems and its relationship with other quality characteristics such as trustworthiness, reliability, and security.
Discusses the specific concepts of robustness for neural networks, including local vs. global robustness, adversarial robustness, corruption robustness, semantic robustness, pointwise robustness, robustness bounds, probabilistic robustness, and targeted robustness.
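Several of these concepts, such as pointwise and probabilistic robustness, can be made concrete with a sampling check: draw random perturbations within a bounded region around an input and count how often the predicted label survives. The following is a minimal numpy sketch; the toy linear classifier and the function names (`empirical_local_robustness`, `toy_predict`) are illustrative assumptions standing in for a real neural network.

```python
import numpy as np

def empirical_local_robustness(predict, x, epsilon, n_samples=1000, seed=0):
    """Estimate local robustness of `predict` at point `x`: the fraction of
    random L-infinity perturbations of magnitude <= epsilon that leave the
    predicted label unchanged (a probabilistic-robustness style estimate)."""
    rng = np.random.default_rng(seed)
    base_label = predict(x)
    unchanged = 0
    for _ in range(n_samples):
        delta = rng.uniform(-epsilon, epsilon, size=x.shape)
        if predict(x + delta) == base_label:
            unchanged += 1
    return unchanged / n_samples

# Toy stand-in for a neural network (assumption for illustration):
# the label is the argmax of a fixed linear map.
W = np.array([[1.0, -1.0], [0.5, 2.0]])
def toy_predict(x):
    return int(np.argmax(W @ x))

x0 = np.array([1.0, 0.2])
score = empirical_local_robustness(toy_predict, x0, epsilon=0.05)
```

A score of 1.0 over many samples suggests (but does not prove) pointwise robustness within the sampled ball; formal verification, discussed below, is needed for a guarantee.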
Robustness Metrics:
Summarizes the metrics used to measure the robustness of neural networks, including local and global robustness metrics.
Examines the various techniques employed to measure the magnitude of image perturbations and represent the perturbation range.
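The most common way to quantify perturbation magnitude is with Lp norms on the pixel-wise difference between the clean and perturbed image: L0 counts changed pixels, L2 measures Euclidean distance, and L-infinity captures the largest single-pixel change. A small sketch (the helper name `perturbation_norms` is illustrative):

```python
import numpy as np

def perturbation_norms(x, x_adv):
    """Common Lp measures of perturbation magnitude between a clean
    image x and a perturbed image x_adv (both float arrays)."""
    delta = (x_adv - x).ravel()
    return {
        "L0": int(np.count_nonzero(delta)),         # number of pixels changed
        "L2": float(np.linalg.norm(delta, ord=2)),  # Euclidean magnitude
        "Linf": float(np.max(np.abs(delta))),       # largest single change
    }

x = np.zeros((2, 2))
x_adv = np.array([[0.0, 0.3], [0.0, -0.4]])
norms = perturbation_norms(x, x_adv)
```

Which norm is appropriate depends on the threat model: L0 suits sparse pixel attacks, while L-infinity is the usual choice for imperceptible dense perturbations.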
Robustness Assessment Methods:
Reviews the verification and testing techniques used for robustness assessment, including formal verification, statistical verification, adversarial testing, and benchmark testing.
Discusses the strengths, limitations, and applicability of these methods in practical scenarios.
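Adversarial testing is often illustrated with the one-step fast gradient sign method (FGSM), which perturbs the input in the direction that increases the classification loss. Below is a minimal sketch for a softmax-linear model with an analytic gradient; the linear model is an assumption standing in for a trained network, and `fgsm_attack` is a hypothetical helper name.

```python
import numpy as np

def fgsm_attack(W, b, x, y, epsilon):
    """One-step FGSM on a softmax-linear classifier logits = W @ x + b
    (a stand-in for a neural network). Moves x by epsilon in the sign
    of the cross-entropy loss gradient with respect to the input."""
    logits = W @ x + b
    p = np.exp(logits - logits.max())
    p /= p.sum()
    # Gradient of cross-entropy wrt x: W^T (softmax(logits) - onehot(y))
    grad = W.T @ (p - np.eye(len(p))[y])
    return x + epsilon * np.sign(grad)

# Toy example: a correctly classified input near the decision boundary.
W = np.array([[2.0, 0.0], [0.0, 2.0]])
b = np.zeros(2)
x = np.array([0.1, 0.0])          # predicted class 0
x_adv = fgsm_attack(W, b, x, y=0, epsilon=0.15)
```

If the label flips under such a small perturbation, the test has found a counterexample to local robustness; failing to find one, however, proves nothing, which is the key limitation of testing relative to formal verification.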
Challenges and Future Directions:
Identifies open challenges and potential future research directions in neural network robustness assessment, such as the need for standardized certification processes and effective benchmarks.
Overall, the survey offers a structured account of the current state of research on neural network robustness assessment in image recognition.
Stats
"Deep learning introduces new failure mechanisms and modes to traditional systems, presenting challenges in evaluating and assuring the quality of intelligent systems."
"The nonlinear and nonconvex behavior of deep neural networks makes their robustness problem serious and difficult to evaluate."
"Two main types of assessment methods are employed for evaluating DNN robustness: robustness verification and robustness testing."
Quotes
"Robustness plays a critical role in ensuring reliable operation of artificial intelligence (AI) systems in complex and uncertain environments."
"The presence of adversarial samples in image classification neural networks highlighted the vulnerability of deep learning models to small input perturbations, which can lead to significant output deviations."
"Existing methods are primarily proposed for adversarial attacks and AR, aiming to identify the minimum perturbation degree that misleads the model output and use it as the measurement of robustness."