The study investigates how inclusive design approaches, specifically ones that account for users' diverse problem-solving styles, can improve users' mental models of Explainable AI (XAI) systems. The researchers conducted a between-subjects study with 69 participants, in which one group used an "Original" version of an XAI prototype and the other used a "Post-GenderMag" version that incorporated inclusivity fixes derived from the GenderMag method.
The key findings are:
Explanation usage had a significant positive impact on participants' mental model scores, indicating that using the explanations more frequently led to better understanding of the AI agents.
The Post-GenderMag group engaged with the explanations, on average, 23% more than the Original group, suggesting that the inclusivity fixes improved users' engagement with the explanations.
The Post-GenderMag group had significantly better mental model scores than the Original group, demonstrating that the inclusivity fixes led to improved mental models of the AI agents (see the analysis sketch below).
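These findings are statistical claims about usage, engagement, and scores. A minimal sketch of how such effects could be checked, using synthetic per-participant data and generic tests (a linear regression and a t-test, which may differ from the tests the paper actually ran, and a hypothetical 35/34 group split), might look like this:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-participant data; the paper's real dataset is not
# reproduced here. Each participant has an explanation-usage count and
# a mental model score.
usage_orig = rng.poisson(8, size=35)   # Original group
usage_post = rng.poisson(10, size=34)  # Post-GenderMag group (more engaged)
score_orig = 0.5 * usage_orig + rng.normal(10, 2, size=35)
score_post = 0.5 * usage_post + rng.normal(12, 2, size=34)

# Finding 1: does explanation usage predict mental model score?
usage = np.concatenate([usage_orig, usage_post])
score = np.concatenate([score_orig, score_post])
reg = stats.linregress(usage, score)
print(f"usage -> score: slope={reg.slope:.2f}, p={reg.pvalue:.4f}")

# Finding 2: relative engagement of the two groups
print(f"engagement ratio: {usage_post.mean() / usage_orig.mean():.2f}")

# Finding 3: between-group difference in mental model scores
t = stats.ttest_ind(score_post, score_orig)
print(f"group score difference: t={t.statistic:.2f}, p={t.pvalue:.4f}")
```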
The researchers analyzed the differences in mental model scores to identify specific inclusivity fixes that contributed to the significant improvement. For example, the addition of an interactive legend in the "Scores Best-to-Worst" (BTW) explanation helped users with low computer self-efficacy and risk-averse attitudes to better differentiate between the data series, leading to increased engagement and better mental models.
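The prototype itself is not shown in this summary, but an interactive legend of the kind described, where clicking a legend entry shows or hides one data series, is a standard charting pattern. A minimal sketch in matplotlib, with made-up agent names standing in for the prototype's data series, might look like this:

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical data: one score series per AI agent.
x = np.arange(10)
fig, ax = plt.subplots()
lines = []
for name in ["Agent A", "Agent B", "Agent C"]:
    (line,) = ax.plot(x, np.random.rand(10).cumsum(), label=name)
    lines.append(line)

legend = ax.legend(title="Click a label to toggle a series")

# Map each legend entry to the plotted line it controls.
line_by_legend = {}
for legend_line, plot_line in zip(legend.get_lines(), lines):
    legend_line.set_picker(5)  # clickable within 5 points
    line_by_legend[legend_line] = plot_line

def on_pick(event):
    plot_line = line_by_legend.get(event.artist)
    if plot_line is None:
        return
    visible = not plot_line.get_visible()
    plot_line.set_visible(visible)
    # Dim the legend entry while its series is hidden.
    event.artist.set_alpha(1.0 if visible else 0.2)
    fig.canvas.draw_idle()

fig.canvas.mpl_connect("pick_event", on_pick)
plt.show()
```

Letting users hide all but one series at a time supports exactly the kind of differentiation between data series that the fix was aimed at.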
Overall, the study highlights the importance of considering users' diverse problem-solving styles when designing XAI systems to promote better understanding and mental models among all users.
Source: Md Montaser ... at arxiv.org, 04-23-2024, https://arxiv.org/pdf/2404.13217.pdf