TX-Gen, a novel algorithm based on evolutionary multi-objective optimization, generates a diverse set of sparse and valid counterfactual explanations for time-series classification models while maintaining proximity to the original input.
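The search described above can be illustrated with a minimal evolutionary loop that trades off validity (the prediction flips), sparsity (few time steps changed), and proximity (small total change). Everything below — the toy threshold classifier, the mutation operator, and the lexicographic objective ordering — is an illustrative sketch, not TX-Gen's actual design:

```python
# Illustrative sketch: evolutionary search for a sparse, proximal
# counterfactual on a toy time-series classifier. The classifier,
# operators, and objective ordering are all assumptions for this demo.
import random

def classify(series, threshold=5.0):
    # Toy stand-in for a trained classifier: label 1 if the mean exceeds a threshold.
    return 1 if sum(series) / len(series) > threshold else 0

def sparsity(a, b, eps=1e-9):
    # Number of time steps that were changed.
    return sum(1 for x, y in zip(a, b) if abs(x - y) > eps)

def proximity(a, b):
    # L1 distance to the original input.
    return sum(abs(x - y) for x, y in zip(a, b))

def mutate(series, rng, scale=1.0):
    # Perturb a single randomly chosen time step (keeps edits sparse).
    child = list(series)
    i = rng.randrange(len(child))
    child[i] += rng.uniform(-scale, scale)
    return child

def counterfactual_search(x, generations=300, pop_size=20, seed=0):
    rng = random.Random(seed)
    target = 1 - classify(x)  # validity means flipping the predicted class

    def fitness(c):
        # Lexicographic objectives: validity first, then sparsity, then proximity.
        return (0 if classify(c) == target else 1, sparsity(x, c), proximity(x, c))

    pop = [mutate(x, rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]  # elitism: best half survives unchanged
        pop = parents + [mutate(rng.choice(parents), rng)
                         for _ in range(pop_size - len(parents))]
    pop.sort(key=fitness)
    best = pop[0]
    return best if classify(best) == target else None

x = [4.8, 4.9, 5.0, 4.9, 4.9]   # classified as 0 (mean <= 5)
cf = counterfactual_search(x)    # a nearby series the model classifies as 1
```

A real multi-objective method would keep a Pareto front of counterfactuals rather than a single lexicographic winner; the ordering here is only to keep the sketch short.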
Explanations, whether generated by humans or machines, can increase over-reliance on incorrect AI predictions when those explanations are perceived as helpful, posing a dilemma for how to handle AI errors.
Knowledge graphs can be leveraged to generate data-driven concepts for concept-based explainable AI, providing robust and well-aligned explanations.
Decision Predicate Graphs (DPG) is a model-agnostic tool that converts opaque tree-based ensemble models into enriched graph structures, enabling comprehensive interpretation of the model's decision-making process through graph-theoretic concepts and metrics.
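The general idea of collapsing an ensemble's split predicates into a graph can be sketched as follows: nodes are predicates, and weighted edges count how often one predicate follows another on a decision path. The toy ensemble encoding and the construction below are illustrative assumptions; DPG's actual enrichment and metrics are richer than this:

```python
# Illustrative sketch: build a predicate graph from a toy tree ensemble.
# Each tree is encoded as its root-to-leaf paths, each path a sequence of
# predicate strings ending in a class label. This encoding is an assumption.
from collections import defaultdict

ensemble = [
    [["x0<=2.5", "x1<=1.0", "class:A"],
     ["x0<=2.5", "x1>1.0", "class:B"],
     ["x0>2.5", "class:B"]],
    [["x1<=1.0", "class:A"],
     ["x1>1.0", "x0<=2.5", "class:B"],
     ["x1>1.0", "x0>2.5", "class:B"]],
]

def build_predicate_graph(trees):
    # Edge (a, b) counts how often predicate a is immediately followed by b
    # on some decision path, aggregated across all trees in the ensemble.
    edges = defaultdict(int)
    for tree in trees:
        for path in tree:
            for a, b in zip(path, path[1:]):
                edges[(a, b)] += 1
    return dict(edges)

graph = build_predicate_graph(ensemble)
# Edge weights expose how often one decision predicate leads to another;
# graph metrics (degree, centrality, communities) can then summarize the
# ensemble's decision-making structure.
```

Aggregating both trees, the edge from `"x1<=1.0"` to `"class:A"` gets weight 2, reflecting that this predicate leads directly to class A in both trees.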
SHIELD regularization aims to improve model explainability and performance by selectively concealing input features and assessing the resulting discrepancy in predictions.
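The core mechanism — conceal some input features and measure how much the prediction moves — can be sketched in a few lines. The linear model, the zero-masking baseline, and the penalty form below are illustrative assumptions, not the paper's formulation:

```python
# Illustrative sketch of a SHIELD-style penalty: hide a random subset of
# input features and penalize the discrepancy between the full and masked
# predictions. Model, masking baseline, and weighting are assumptions.
import random

def predict(weights, x):
    # Toy linear model standing in for any differentiable predictor.
    return sum(w * xi for w, xi in zip(weights, x))

def shield_penalty(weights, x, hide_fraction=0.3, seed=0):
    rng = random.Random(seed)
    n_hidden = max(1, int(hide_fraction * len(x)))
    hidden = set(rng.sample(range(len(x)), n_hidden))
    masked = [0.0 if i in hidden else xi for i, xi in enumerate(x)]
    # Large discrepancy means the model leans heavily on the concealed
    # features; adding this term to the training loss discourages that.
    return abs(predict(weights, x) - predict(weights, masked))

weights = [0.5, 2.0, 0.1, 0.0]
x = [1.0, 1.0, 1.0, 1.0]
penalty = shield_penalty(weights, x)
```

During training the penalty would be added to the task loss, so the optimizer trades predictive fit against robustness to concealed features.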
Faithfulness and plausibility can be complementary objectives in explainable AI: traditional perturbation-based methods such as Shapley values and LIME can produce explanations that are both faithful to the model and accessible to users.
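The perturbation idea underlying such methods can be shown with single-feature occlusion: perturb one feature at a time and record how the model's output changes. Real Shapley values average over all feature coalitions and LIME fits a local surrogate model; this simplification is only meant to convey the shared mechanism (for a linear model, occlusion against a zero baseline happens to coincide with the Shapley values):

```python
# Illustrative sketch: occlusion-based attributions, a simplified stand-in
# for perturbation methods like LIME and Shapley values. The toy model and
# zero baseline are assumptions for this demo.
def predict(x):
    # Toy black box: a fixed weighted sum of features.
    weights = [3.0, -1.0, 0.5]
    return sum(w * xi for w, xi in zip(weights, x))

def occlusion_attributions(x, baseline=0.0):
    # Attribution for feature i = output drop when feature i is replaced
    # by the baseline value while all other features stay fixed.
    full = predict(x)
    attributions = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline  # conceal feature i
        attributions.append(full - predict(perturbed))
    return attributions

print(occlusion_attributions([1.0, 2.0, 4.0]))  # → [3.0, -2.0, 2.0]
```

The signed attributions are faithful to this model by construction (they sum the exact contribution each feature makes to the output), while the per-feature breakdown is the kind of accessible presentation users find plausible.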