Transparency in AI diagnostics is crucial for reliable healthcare integration.
Statistical methods can provide a robust framework for defining, estimating, and evaluating explanations for black-box machine learning models, addressing key challenges in the field of explainability.
LUX is a rule-based explainer that can generate factual, counterfactual, and visual explanations for black-box machine learning models. It is based on a modified decision tree algorithm that uses SHAP-guided split node selection and oblique linear splits to provide simple, consistent, and representative explanations.
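The SHAP-guided split selection idea can be sketched in miniature: instead of searching all features for the best split, the surrogate tree restricts the search to the feature with the highest attribution. The sketch below is not LUX itself; the per-feature importances are assumed to be precomputed (e.g., mean |SHAP| values), and the threshold search uses plain Gini impurity.

```python
import numpy as np

def gini(y):
    # Gini impurity of an integer label vector
    if len(y) == 0:
        return 0.0
    p = np.bincount(y) / len(y)
    return 1.0 - np.sum(p ** 2)

def shap_guided_split(X, y, importances):
    """Toy illustration: pick the split feature with the highest
    (assumed precomputed) mean |SHAP| importance, then choose the
    threshold on that feature minimizing weighted child impurity."""
    feat = int(np.argmax(importances))
    values = np.unique(X[:, feat])
    best_thr, best_score = None, np.inf
    for thr in (values[:-1] + values[1:]) / 2.0:  # candidate midpoints
        left, right = y[X[:, feat] <= thr], y[X[:, feat] > thr]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best_score:
            best_thr, best_score = thr, score
    return feat, best_thr

# Toy data: feature 1 separates the classes; importances assume SHAP agrees.
X = np.array([[0.1, 1.0], [0.2, 2.0], [0.3, 5.0], [0.4, 6.0]])
y = np.array([0, 0, 1, 1])
importances = np.array([0.05, 0.90])  # hypothetical mean |SHAP| per feature
feat, thr = shap_guided_split(X, y, importances)
print(feat, thr)  # splits on feature 1 at 3.5
```

A resulting rule such as "feature 1 <= 3.5 → class 0" is the kind of factual explanation a rule-based surrogate emits; LUX additionally supports oblique (linear-combination) splits, which this axis-aligned sketch omits.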
The proposed Symbolic XAI framework attributes relevance to logical formulas (queries) that express relationships between input features, providing a human-understandable explanation of the model's prediction strategy.
A user-centered evaluation study of AI explainability identified limitations of existing XAI algorithms and showed that users' comprehension of explanations varies with their background knowledge; based on these findings, it argued for new XAI design principles and evaluation techniques tailored to diverse user groups.