Machine learning models can be challenging to interpret, a difficulty that has driven the emergence of Counterfactual Explanations (CEs) in eXplainable Artificial Intelligence (XAI). The UFCE methodology addresses limitations of existing CE algorithms and aims to provide actionable explanations grounded in user feedback.
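To illustrate the general idea behind user-guided counterfactual search (a minimal sketch, not the UFCE algorithm itself; the classifier, feature ranges, and helper name are hypothetical): the user declares which features may change and within what intervals, and candidates are sampled only inside that region, keeping the suggested changes actionable.

```python
# Minimal sketch of user-constrained counterfactual search (illustrative,
# not the UFCE algorithm). The user supplies per-feature intervals; only
# those features are perturbed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = RandomForestClassifier(random_state=0).fit(X, y)

def user_constrained_ce(x, clf, feature_ranges, n_samples=2000):
    """Search for a counterfactual by sampling only user-permitted features.

    feature_ranges: dict {feature_index: (low, high)} supplied by the user.
    Returns the valid counterfactual closest to x in L1 distance, or None.
    """
    target = 1 - clf.predict(x.reshape(1, -1))[0]            # opposite class
    candidates = np.tile(x, (n_samples, 1))
    for j, (lo, hi) in feature_ranges.items():               # perturb allowed features only
        candidates[:, j] = rng.uniform(lo, hi, size=n_samples)
    valid = candidates[clf.predict(candidates) == target]
    if len(valid) == 0:
        return None
    return valid[np.argmin(np.abs(valid - x).sum(axis=1))]   # most proximal candidate

x0 = X[0]
ce = user_constrained_ce(x0, clf, feature_ranges={0: (-2, 2), 1: (-2, 2)})
print("original:", x0, "counterfactual:", ce)
```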
Graph-based algorithms offer efficient and meaningful counterfactual explanations for image classifiers.
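A common graph-based recipe, sketched below on low-dimensional feature vectors (for images one would typically operate on learned embeddings rather than raw pixels; the data and function names are illustrative): connect instances in a k-NN graph and follow the shortest path to the nearest point the model labels differently, so the counterfactual stays on the data manifold.

```python
# Illustrative graph-based counterfactual search. Instances are nodes in a
# k-NN graph; the counterfactual is the endpoint of the shortest path to a
# point the model classifies differently.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import dijkstra

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
labels = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)    # stand-in for model predictions

def graph_counterfactual(X, labels, query_idx, k=8):
    graph = kneighbors_graph(X, n_neighbors=k, mode="distance")
    dist, pred = dijkstra(graph, directed=False,
                          indices=query_idx, return_predecessors=True)
    other = np.where(labels != labels[query_idx])[0]       # differently-classified nodes
    cf_idx = other[np.argmin(dist[other])]                 # closest along the graph
    assert np.isfinite(dist[cf_idx]), "no counterfactual reachable in graph"
    path = [cf_idx]                                        # backtrack the shortest path
    while path[-1] != query_idx:
        path.append(pred[path[-1]])
    return cf_idx, path[::-1]

cf_idx, path = graph_counterfactual(X, labels, query_idx=0)
print("counterfactual index:", cf_idx, "path through data:", path)
```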
Adversarial random forests (ARF) can be leveraged to efficiently generate plausible counterfactual explanations that are also sparse and proximal to the original instance.
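The generate-then-select pattern behind such forest-based methods can be sketched as follows. Note this is a hedged stand-in: naive same-class resampling takes the place of sampling from the ARF's learned density, and all names and thresholds are hypothetical.

```python
# Generic generate-then-select sketch of forest-based counterfactual
# generation. A real ARF samples jointly from the forest's learned data
# distribution; here a naive stand-in resamples rows of the target class
# as "plausible" candidates, then keeps the sparsest, most proximal one.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 2] > 0).astype(int)
clf = RandomForestClassifier(random_state=0).fit(X, y)

def plausible_ce(x, clf, X_train, y_train, target, n_candidates=500, lam=0.5):
    pool = X_train[y_train == target]                      # plausibility stand-in
    candidates = pool[rng.integers(len(pool), size=n_candidates)].copy()
    # Snap nearly-unchanged features back to x to encourage sparsity:
    keep = np.abs(candidates - x) < 0.25
    candidates[keep] = np.broadcast_to(x, candidates.shape)[keep]
    valid = candidates[clf.predict(candidates) == target]
    if len(valid) == 0:
        return None
    sparsity = (valid != x).sum(axis=1)                    # changed-feature count
    proximity = np.abs(valid - x).sum(axis=1)              # L1 distance to x
    return valid[np.argmin(proximity + lam * sparsity)]    # joint objective

x0 = X[0]
target = 1 - clf.predict(x0.reshape(1, -1))[0]
print(plausible_ce(x0, clf, X, y, target))
```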
Robust optimization techniques can be leveraged to generate counterfactual explanations that are provably robust to model parameter changes while remaining plausible with respect to the training-data distribution.
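For the special case of a linear classifier, the robustness requirement has a closed form: if the weight vector may shift anywhere within an L2 ball of radius ε, the worst-case class-1 score of a point x′ is w·x′ + b − ε‖x′‖ by the Cauchy-Schwarz inequality. The sketch below (parameters and names are illustrative, not the cited method) minimizes distance to the factual point while forcing that worst-case score above a margin.

```python
# Worked sketch of parameter-robust counterfactual search for a linear
# classifier (illustrative special case, not the cited method). If weights
# can shift within an L2 ball of radius eps, the worst-case class-1 score
# of x' is  w @ x' + b - eps * ||x'||  by Cauchy-Schwarz. We minimise
# distance to x while pushing that worst case above a margin.
import numpy as np

w, b, eps = np.array([1.5, -1.0]), -0.2, 0.1
x = np.array([-1.0, 1.0])                               # factual, classified as 0

def robust_counterfactual(x, w, b, eps, margin=0.05, lr=0.05, steps=2000, lam=5.0):
    xc = x.copy()
    for _ in range(steps):
        worst = w @ xc + b - eps * np.linalg.norm(xc)   # worst-case class-1 score
        grad_dist = 2 * (xc - x)                        # gradient of ||xc - x||^2
        if worst < margin:                              # hinge penalty active
            grad_worst = w - eps * xc / (np.linalg.norm(xc) + 1e-12)
            grad = grad_dist - lam * grad_worst
        else:
            grad = grad_dist
        xc -= lr * grad
    return xc, w @ xc + b - eps * np.linalg.norm(xc)

xc, worst = robust_counterfactual(x, w, b, eps)
print("counterfactual:", xc, "worst-case score:", worst)
```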
This work presents a framework for generating feasible and sparse counterfactual explanations that satisfy causal constraints, yielding actionable insights for real-world applications.
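A minimal sketch of the causal-consistency idea, assuming a hypothetical two-variable structural causal model: intervening on a parent feature re-computes its descendants via the structural equations (abduction, action, prediction), so candidates never vary features independently of their causes.

```python
# Minimal sketch of causally-consistent counterfactual search on a toy
# structural causal model (variable names and equations are hypothetical).
# Intervening on a parent re-computes its child, so every candidate
# respects the assumed causal graph.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
edu = rng.normal(size=1000)                             # exogenous parent
income = 0.8 * edu + rng.normal(scale=0.5, size=1000)   # child: income := 0.8*edu + noise
X = np.column_stack([edu, income])
y = (income > 0).astype(int)
clf = LogisticRegression().fit(X, y)

def causal_ce(x, clf, deltas=np.linspace(-3, 3, 121)):
    """Intervene on `edu` only; `income` follows the structural equation."""
    target = 1 - clf.predict(x.reshape(1, -1))[0]
    noise = x[1] - 0.8 * x[0]                           # abduction: recover noise term
    for d in sorted(deltas, key=abs):                   # try smallest interventions first
        edu_new = x[0] + d                              # action: intervene on edu
        cand = np.array([edu_new, 0.8 * edu_new + noise])  # prediction: propagate to income
        if clf.predict(cand.reshape(1, -1))[0] == target:
            return cand
    return None

x0 = X[0]
print("factual:", x0, "-> causal counterfactual:", causal_ce(x0, clf))
```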