Key Concepts
Abductive explanations (AXp's) are widely used for understanding decisions of classifiers. However, ignoring constraints between features may lead to an explosion in the number of redundant or superfluous AXp's. The authors propose three new types of explanations that take into account constraints and can be generated from the whole feature space or from a sample. They analyze the complexity of finding an explanation and investigate its formal properties.
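In the standard setting, an AXp is a subset-minimal set of features whose values, once fixed to those of the instance, are sufficient to entail the prediction. The toy sketch below (an invented classifier and constraint, not an example or definition from the paper) illustrates why constraints matter: two AXp's computed over the unconstrained feature space can cover exactly the same feasible instances, so one of them becomes redundant once the constraint is taken into account.

```python
from itertools import product, combinations

# Toy illustration (invented, not from the paper): three Boolean features;
# the model predicts 1 iff x0 or x1 is set; a domain constraint states that
# x0 and x1 are always equal (e.g. duplicated columns).
FEATURES = range(3)

def clf(x):
    return int(x[0] == 1 or x[1] == 1)

feasible = [x for x in product([0, 1], repeat=3) if x[0] == x[1]]

def is_weak_axp(subset, v, points):
    # `subset` is sufficient for v over `points` if fixing those features to
    # v's values preserves v's prediction on every matching point.
    return all(clf(x) == clf(v) for x in points
               if all(x[i] == v[i] for i in subset))

def minimal_axps(v, points):
    # Brute-force enumeration of subset-minimal sufficient sets (AXp's);
    # fine for toy sizes only.
    weak = [set(s) for r in range(len(FEATURES) + 1)
            for s in combinations(FEATURES, r) if is_weak_axp(s, v, points)]
    return [s for s in weak if not any(w < s for w in weak)]

def coverage(subset, v, points):
    # Feasible instances the explanation applies to (the coverage idea).
    return {x for x in points if all(x[i] == v[i] for i in subset)}

v = (1, 1, 0)                                   # instance to explain, clf(v) = 1
full_space = list(product([0, 1], repeat=3))    # constraint-free feature space
for axp in minimal_axps(v, full_space):         # AXp's computed ignoring the constraint
    print(axp, coverage(axp, v, feasible))      # {0} and {1} cover the same feasible
                                                # instances, so one of them is redundant
```

The paper's three constrained explanation types refine this idea with precise definitions of coverage and preference; the sketch only shows why coverage can discard redundant AXp's.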
Summary
The paper studies abductive explanations (AXp's) of classifier decisions in the presence of constraints between features.
Key highlights:
- Existing definitions of AXp's are suitable when features are independent, but ignoring constraints between features may lead to an explosion in the number of redundant or superfluous AXp's.
- The authors propose three new types of explanations that take into account constraints: coverage-based prime-implicant explanation (CPI-Xp), minimal CPI-Xp (mCPI-Xp), and preferred CPI-Xp (pCPI-Xp).
- The authors analyze the complexity of finding an explanation and show that taking into account constraints increases the computational complexity compared to the unconstrained case.
- To keep computation feasible, the authors also propose a sample-based approach in which explanations are generated from a sample of the feature space instead of the entire space (see the sketch after this list).
- The authors introduce desirable properties for explanation functions and analyze the different types of explanations against these properties.
- The results show that the explainer generating preferred CPI-Xp's is the only one that satisfies all the desirable properties.
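A hedged sketch of the sample-based idea (same invented toy setup as above, not the paper's algorithm): the sufficiency and coverage of a candidate explanation are checked against a finite sample of instances rather than the whole feasible space, which is what makes generation cheaper in practice.

```python
import random
from itertools import product

# Same invented toy setup; the random sample stands in for a dataset.
def clf(x):
    return int(x[0] == 1 or x[1] == 1)

feasible = [x for x in product([0, 1], repeat=3) if x[0] == x[1]]
sample = random.sample(feasible, k=3)

def sufficient_on(subset, v, points):
    # The candidate is accepted if it preserves v's prediction on every
    # sampled instance that matches it (no reasoning over the full space).
    return all(clf(x) == clf(v) for x in points
               if all(x[i] == v[i] for i in subset))

def sample_coverage(subset, v, points):
    # Coverage is likewise measured on the sample only.
    return [x for x in points if all(x[i] == v[i] for i in subset)]

v = (1, 1, 0)
candidate = {0}                                   # candidate explanation: {x0}
print(sufficient_on(candidate, v, sample))
print(len(sample_coverage(candidate, v, sample)))
```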
Quotes
"Abductive explanations (AXp's) are widely used for understanding decisions of classifiers."
"Ignoring constraints when they exist between features may lead to an explosion in the number of redundant or superfluous AXp's."
"Coverage is powerful enough to discard redundant and superfluous AXp's."