The paper investigates the role of explainable AI (XAI) and human-machine interfaces (HMIs) in enhancing trust and situation awareness in autonomous vehicles. It explores a systematic "3W1H" approach (what, whom, when, how) to determine the best practices for conveying explanatory information to various stakeholders, including passengers, human drivers, pedestrians, and other road users.
The key insights are:
What to explain: Explanations should cover the autonomous vehicle's decisions, traffic scenes, and events to improve user understanding.
Whom to explain to: Explanations should be tailored for passengers, human drivers, people with cognitive/physical impairments, remote operators, bystanders, cyclists, traffic enforcement officials, and emergency responders.
When to explain: Explanations should be provided during critical and emergency situations, in takeover scenarios, and before an action is performed.
How to explain: Explanations can be delivered through audio, visual, text, vibrotactile, and haptic channels, as well as heads-up displays, passenger interfaces, and braille interfaces, to accommodate the diverse needs of users (see the sketch after this list).
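To make the 3W1H taxonomy concrete, below is a minimal sketch in Python with field and enum names of our own choosing; the paper prescribes no such data model, so treat this as one possible encoding of the four dimensions.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Modality(Enum):
    """Delivery channels for an explanation (the "how" dimension)."""
    AUDIO = auto()
    VISUAL = auto()
    TEXT = auto()
    VIBROTACTILE = auto()
    HAPTIC = auto()
    HEADS_UP_DISPLAY = auto()
    BRAILLE = auto()


class Timing(Enum):
    """When the explanation is issued (the "when" dimension)."""
    BEFORE_ACTION = auto()
    CRITICAL_EVENT = auto()
    TAKEOVER_REQUEST = auto()


@dataclass
class Explanation:
    """One explanatory message parameterized along the 3W1H axes."""
    content: str                 # what: a decision, traffic scene, or event
    audience: str                # whom: e.g. "passenger", "remote operator"
    timing: Timing               # when
    modalities: list[Modality]   # how; multimodal delivery is allowed


# Example: warn a passenger just before an evasive maneuver.
msg = Explanation(
    content="Braking: pedestrian detected ahead",
    audience="passenger",
    timing=Timing.BEFORE_ACTION,
    modalities=[Modality.AUDIO, Modality.HEADS_UP_DISPLAY],
)
```

Keeping the "how" field as a list makes multimodal delivery explicit, which matters for audiences such as users with sensory impairments.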
The paper then presents a situation awareness framework that integrates XAI and HMI to enable interactive dialogues between users and autonomous vehicles. The framework aims to provide descriptive, reactive, and inquisitive explanations that improve users' perception, comprehension, and projection of the vehicle's behavior, the three levels of situation awareness.
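As an illustration only, the hypothetical ExplanationAgent below sketches how the three explanation styles could be dispatched in such a dialogue; the class and method names are ours, not the paper's API.

```python
class ExplanationAgent:
    """Toy dispatcher for the framework's three explanation styles:

    - descriptive: unprompted narration of the vehicle's behavior
    - reactive:    an answer to a direct user question
    - inquisitive: the vehicle querying the user, e.g. to confirm intent
    """

    def descriptive(self, action: str, reason: str) -> str:
        return f"I am {action} because {reason}."

    def reactive(self, question: str, scene_facts: dict[str, str]) -> str:
        # A real system would ground this in the perception stack, e.g.
        # via a visual question-answering model; a keyword lookup stands
        # in for that here.
        for keyword, fact in scene_facts.items():
            if keyword in question.lower():
                return fact
        return "I do not have enough information to answer that."

    def inquisitive(self, prompt: str) -> str:
        return f"Question for you: {prompt}"


agent = ExplanationAgent()
print(agent.descriptive("slowing down", "a cyclist is merging ahead"))
print(agent.reactive("Why did you stop?",
                     {"stop": "The traffic light ahead turned red."}))
print(agent.inquisitive("Should I take the faster toll route?"))
```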
The authors conduct an experiment using a visual question-answering model to validate the framework and perform a user study to assess the impact of incorrect explanations on users' perceived safety and comfort with autonomous driving. The results highlight the importance of providing faithful and robust explanations to foster trust and acceptance of autonomous vehicle technology.
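The paper's evaluation code is not reproduced here; as a rough sketch of how a driving scene can be queried with an off-the-shelf visual question-answering model, the snippet below uses a public Hugging Face ViLT checkpoint, which is an assumption on our part rather than the authors' actual model. The image path is likewise hypothetical.

```python
from PIL import Image
from transformers import pipeline

# Off-the-shelf VQA model; the paper's actual model and data may differ.
vqa = pipeline("visual-question-answering",
               model="dandelin/vilt-b32-finetuned-vqa")

scene = Image.open("front_camera_frame.jpg")  # hypothetical dashcam frame

for question in ("Is there a pedestrian crossing the road?",
                 "What color is the traffic light?"):
    best = vqa(image=scene, question=question, top_k=1)[0]
    print(f"{question} -> {best['answer']} (confidence {best['score']:.2f})")
```

Comparing such answers against ground-truth scene annotations is one way to detect the kind of unfaithful explanations the user study warns about.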
Source: Shahin Ataki..., arxiv.org, 04-12-2024, https://arxiv.org/pdf/2404.07383.pdf