This comprehensive survey explores the design, development, and evaluation of explainable interfaces (EIs) within explainable artificial intelligence (XAI)-enhanced human-autonomy teaming (HAT) systems.
The paper first clarifies the distinctions between key concepts such as EIs, explanations, and model explainability, giving researchers and practitioners a structured understanding of the field. It then contributes a novel EI framework that addresses the unique challenges of HAT.
The survey organizes model explainability enhancement methods across the pre-modeling, model design, and post-modeling stages, with particular attention to ongoing research on large language models (LLMs) for explanation generation.
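To make the post-modeling (post-hoc) stage concrete, the sketch below fits an interpretable global surrogate to a black-box classifier and reports its fidelity; this is a generic illustration of the technique category, not a method taken from the survey, and the synthetic dataset, model choices, and feature names are assumptions.

```python
# Minimal post-hoc explainability sketch: a shallow decision-tree surrogate
# approximating a black-box model's behaviour (illustrative assumptions only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Synthetic task standing in for an autonomy component's decision model.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)

# "Black-box" model whose behaviour an explainable interface must convey.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Post-hoc surrogate: a shallow tree trained on the black-box's own predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how closely the surrogate mimics the black-box on the same inputs.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(6)]))
```

The tree's printed rules are what an EI could surface to human teammates, with the fidelity score indicating how faithfully those rules reflect the underlying model.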
An evaluation framework for EI-HAT is proposed, encompassing model performance, human-centered factors, and group task objectives. The paper also highlights challenges and future directions for integrating EIs into HAT, and offers insights into the evolution and adaptability of EIs across healthcare, transportation, and digital twin applications.
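One way to operationalize the three evaluation dimensions is a simple record type that groups metrics by dimension; the field names, 0-1 scales, and example scores below are hypothetical assumptions for illustration, not a schema defined in the survey.

```python
# Minimal sketch of recording an EI-HAT evaluation along the survey's three
# dimensions (assumed metric names and scales, for illustration only).
from dataclasses import dataclass


@dataclass
class EIHATEvaluation:
    # Model performance (e.g., task accuracy of the underlying autonomy model).
    model_accuracy: float
    # Human-centered factors (e.g., questionnaire-based trust and workload scores).
    human_trust: float
    workload: float
    # Group task objectives (e.g., team-level mission success rate).
    team_success_rate: float

    def summary(self) -> dict:
        """Group the scores by evaluation dimension."""
        return {
            "model_performance": {"accuracy": self.model_accuracy},
            "human_centered": {"trust": self.human_trust, "workload": self.workload},
            "group_task": {"success_rate": self.team_success_rate},
        }


# Example usage with hypothetical scores.
print(EIHATEvaluation(0.91, 0.78, 0.35, 0.82).summary())
```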
Source: Xiangqi Kong... via arxiv.org, 05-07-2024. https://arxiv.org/pdf/2405.02583.pdf