Core Concepts
Explainable AI (XAI) methods for neural networks are vulnerable to adversarial attacks, but dedicated defenses can mitigate these risks.
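To illustrate why explanations can be attacked at all, here is a minimal sketch (not the paper's method) of how a gradient-based saliency explanation for a tiny ReLU network can flip under a small input perturbation even when the prediction barely changes. The two-layer network and all weights are illustrative assumptions.

```python
import numpy as np

# Toy 2-layer ReLU network: f(x) = w2 . relu(W1 @ x).
# A gradient (saliency) explanation is df/dx; with ReLU it depends on
# which hidden units are active, so a small input shift can change the
# explanation even when the output stays the same.
W1 = np.array([[ 1.0, -1.0],
               [-1.0,  1.0]])
w2 = np.array([1.0, 1.0])

def forward(x):
    h = np.maximum(W1 @ x, 0.0)    # ReLU hidden layer
    return w2 @ h

def saliency(x):
    gate = (W1 @ x > 0).astype(float)  # ReLU activation pattern
    return (w2 * gate) @ W1            # gradient of f w.r.t. x

x_clean = np.array([0.6, 0.5])
x_adv   = np.array([0.5, 0.6])  # small perturbation

print(forward(x_clean), saliency(x_clean))  # same output as below...
print(forward(x_adv),   saliency(x_adv))    # ...but a flipped saliency map
```

Here both inputs produce the identical prediction, yet the saliency vector reverses sign, which is exactly the kind of explanation instability that explanation-aware attacks exploit and that the paper's defense targets.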
Statistics
Ensuring the reliability of XAI methods poses a serious challenge. The proposed defense achieves an approximate 99% decrease in the Attack Success Rate (ASR) and a 91% reduction in the Mean Square Error (MSE).
Over recent years, a variety of methods have been proposed to explain the decisions of neural networks.
Quotes
"The method we suggest defends against most modern explanation-aware adversarial attacks."
"Ensuring the reliability of XAI methods poses a real challenge."