The paper explores the use of explainable AI (XAI) techniques to enhance users' understanding of, and interaction with, gaze-based models in extended reality (XR) environments. The authors developed a real-time, multi-level XAI interface for a gaze-based interaction system and evaluated it in a user study.
The key highlights and insights are:
Gaze-based interactions in XR can leverage machine learning models to achieve higher accuracy, but the black-box nature of these models makes it challenging for users to understand and adapt their gaze behavior effectively.
The authors hypothesized that XAI can serve as a bridge to help users better learn and understand AI-powered interaction systems, enabling more efficient collaboration.
They trained a temporal convolutional network (TCN) to predict the probability of target selection and used SHAP-based counterfactual explanations to drive multi-level XAI visualizations.
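The defining operation of a TCN is a causal, dilated 1-D convolution: each output depends only on current and past inputs, with dilation widening the receptive field exponentially across layers. The sketch below is illustrative only; the kernel values, layer count, and univariate input are assumptions, not the paper's architecture.

```python
# Minimal sketch of the causal dilated convolution at the core of a TCN,
# applied to a univariate gaze-feature sequence (e.g. velocity per frame).
# Weights and depth are placeholders, not taken from the paper.

def causal_dilated_conv(x, kernel, dilation=1):
    """Causal 1-D convolution: out[t] depends only on x[t], x[t-d], x[t-2d], ..."""
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(kernel):
            idx = t - i * dilation  # look back i * dilation steps
            if idx >= 0:
                acc += w * x[idx]
        out.append(acc)
    return out

def tcn_stack(x, kernel, n_layers=3):
    """Stack layers with exponentially growing dilation (1, 2, 4, ...) + ReLU."""
    h = x
    for layer in range(n_layers):
        h = [max(0.0, v) for v in causal_dilated_conv(h, kernel, 2 ** layer)]
    return h
```

In a real system the final per-timestep activations would be squashed (e.g. by a sigmoid) into the selection probability the explanations are computed over; here the point is only the causal structure, which guarantees predictions never leak future gaze samples.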
A between-subjects user study with 32 participants showed that the XAI condition significantly improved selection accuracy (a 10.8% increase in F1 score) over the control condition.
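For reference, the F1 score used as the selection-accuracy metric is the harmonic mean of precision and recall over selection events. A minimal sketch (the counts in the test are illustrative, not the study's data):

```python
# F1 score from raw counts: true positives (tp), false positives (fp),
# false negatives (fn). Guards against division by zero for empty classes.
def f1_score(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```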
Participants in the XAI condition also exhibited more deliberate, controlled gaze behavior, with lower gaze velocity and longer fixation durations, suggesting they were better able to understand the model's behavior and adapt their gaze to it.
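The two behavioral metrics above, gaze velocity and fixation duration, are typically derived from timestamped gaze samples with a velocity-threshold (I-VT-style) segmentation. A hedged sketch; the threshold value and 2-D point units are assumptions, not the paper's parameters:

```python
import math

def gaze_velocities(points, timestamps):
    """Per-sample velocity between consecutive 2-D gaze points (units/s)."""
    vels = []
    for p0, p1, t0, t1 in zip(points, points[1:], timestamps, timestamps[1:]):
        dist = math.hypot(p1[0] - p0[0], p1[1] - p0[1])
        dt = t1 - t0
        vels.append(dist / dt if dt > 0 else 0.0)
    return vels

def fixation_durations(points, timestamps, vel_threshold=30.0):
    """Group consecutive below-threshold samples into fixations; return
    each fixation's duration in seconds. Threshold is an assumed default."""
    vels = gaze_velocities(points, timestamps)
    durations, start = [], None
    for i, v in enumerate(vels):
        if v <= vel_threshold:
            if start is None:
                start = timestamps[i]
        elif start is not None:
            durations.append(timestamps[i] - start)
            start = None
    if start is not None:
        durations.append(timestamps[-1] - start)
    return durations
```

Lower mean velocity and longer fixation durations under XAI would then indicate steadier, more intentional dwelling on targets.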
Qualitative feedback offered insight into participants' preferences for XAI explanations, including a desire for real-time, adaptive, and reinforced feedback that maps onto their own gaze behavior.
The findings suggest that XAI can be a valuable tool for enhancing user understanding and collaboration with model-driven gaze-based interactions in XR environments.
Key insights extracted from: Mengjie Yu, D... at arxiv.org, 04-23-2024. https://arxiv.org/pdf/2404.13777.pdf