Core Concepts
Incremental XAI provides more detailed and memorable explanations of AI predictions by partitioning the data into typical and outlier cases, explaining typical cases with a base explanation, and explaining outliers as incremental adjustments to that base.
Abstract
The paper proposes Incremental XAI, a framework for providing explanations of AI predictions that are more detailed and memorable for users. The key ideas are:
Partitioning the data into typical and outlier cases:
Train a base linear model on the majority (typical) subspace of the data.
Train an incremental linear model on the minority (outlier) subspace, with factors that are additive adjustments to the base factors.
Regularizing the incremental factors to be sparse:
Use L1 regularization to encourage the incremental factors to be 0 when possible, reducing the number of new factors for users to remember.
Providing explanations incrementally:
First explain the typical cases using the base factors.
Then explain the outlier cases by showing the incremental adjustments to the base factors.
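The three steps above can be sketched with off-the-shelf linear models: an ordinary least-squares fit on the typical subspace, then an L1-penalized (Lasso) fit of the base model's residuals on the outlier subspace, so the learned deltas are sparse additive adjustments to the base factors. This is an illustrative sketch, not the paper's implementation: the typical/outlier partition is assumed to be given, the data is synthetic, and the `alpha` value is arbitrary.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

rng = np.random.default_rng(0)

# Synthetic data: typical cases follow one linear rule; outliers differ
# from it only in the first factor (a sparse difference).
X_typ = rng.normal(size=(200, 3))
y_typ = X_typ @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=200)
X_out = rng.normal(size=(50, 3))
y_out = X_out @ np.array([4.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=50)

# 1. Base linear model on the typical subspace.
base = LinearRegression().fit(X_typ, y_typ)

# 2. Incremental model: regress the base model's residuals on the outlier
#    subspace with an L1 penalty, so the additive adjustments stay sparse.
resid = y_out - base.predict(X_out)
inc = Lasso(alpha=0.05).fit(X_out, resid)

base_factors = base.coef_
deltas = inc.coef_                      # mostly zero thanks to L1
outlier_factors = base_factors + deltas  # effective factors for outliers

# 3. Explain incrementally: base factors first, then only nonzero deltas.
print("base factors:", np.round(base_factors, 2))
for i, d in enumerate(deltas):
    if abs(d) > 1e-6:
        print(f"feature {i}: base {base_factors[i]:+.2f}, adjustment {d:+.2f}")
```

Because only the first delta is nonzero, an outlier explanation reuses the familiar base factors and asks the user to remember a single adjustment, which is the memorability argument behind the L1 regularization.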
This approach aims to balance faithfulness to the AI model's predictions with memorability for users, offering a middle ground between simpler global explanations and more complex local explanations.
The paper evaluates Incremental XAI against baseline Global, Subglobal, and Local explanations through modeling studies, formative user studies, and summative user studies. Key findings include:
Incremental explanations are more memorable and understandable than Subglobal explanations, while maintaining similar faithfulness.
Users appreciate the dynamic and adaptive nature of Incremental and Subglobal explanations compared to the static Global explanations.
Users find the partitioning of instances into typical and outlier cases, and the incremental factors, to be intuitive and helpful for understanding the AI system's behavior.
Stats
No specific numerical results are reproduced here; the evaluation summarized above is primarily qualitative, focusing on user perceptions and understanding of the different XAI explanation types.
Quotes
"[Incremental] would be more informed since you have the base factors and then the incremental factors on top of that."
"it's logical that the factors would change for different type of houses, ... since there might be other factors that influence the factor values for each attribute."
"the additional factors [in Incremental] are helpful for the predictions in terms of accuracy."