In the realm of automated driving, establishing trust in AI systems necessitates explainability for decisions. The Qualitative Explainable Graph (QXG) offers a unified symbolic and qualitative representation for scene understanding in urban mobility. By leveraging spatio-temporal graphs and qualitative constraints, the QXG interprets an automated vehicle's environment using sensor data and machine learning models. This approach enables real-time construction of intelligible scene models across various sensor types, facilitating in-vehicle explanations and decision-making. The research highlights the transformative potential of QXG in elucidating decision rationales in automated driving scenarios, catering to diverse needs such as informing passengers, alerting vulnerable road users, and analyzing past behaviors.
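To make the idea of a spatio-temporal graph with qualitative relations concrete, here is a minimal, hypothetical sketch in Python. The class names, relation labels, and construction steps are illustrative assumptions, not the paper's actual calculi or algorithm: nodes represent tracked objects and edges accumulate a per-frame qualitative relation between object pairs, which is the raw material for an explanation.

```python
# Minimal illustrative sketch of a Qualitative Explainable Graph (QXG).
# All names and relation labels are hypothetical; the paper's actual
# qualitative calculi and construction method may differ.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class TrackedObject:
    """An object detected in the scene (e.g., car, pedestrian, cyclist)."""
    obj_id: str
    category: str


@dataclass
class QXG:
    """Spatio-temporal graph: nodes are objects, edges hold one
    qualitative relation per frame for each object pair."""
    objects: Dict[str, TrackedObject] = field(default_factory=dict)
    # (source_id, target_id) -> list of (frame index, qualitative relation)
    relations: Dict[Tuple[str, str], List[Tuple[int, str]]] = field(default_factory=dict)

    def add_object(self, obj: TrackedObject) -> None:
        self.objects[obj.obj_id] = obj

    def add_relation(self, src: str, dst: str, frame: int, relation: str) -> None:
        self.relations.setdefault((src, dst), []).append((frame, relation))

    def history(self, src: str, dst: str) -> List[Tuple[int, str]]:
        """Return the qualitative interaction history of an object pair,
        the raw material from which an explanation can be derived."""
        return self.relations.get((src, dst), [])


# Usage: explaining a braking decision as a pedestrian gets closer over time.
qxg = QXG()
qxg.add_object(TrackedObject("ego", "car"))
qxg.add_object(TrackedObject("ped1", "pedestrian"))
for frame, rel in enumerate(["far_front", "near_front", "very_near_front"]):
    qxg.add_relation("ego", "ped1", frame, rel)
print(qxg.history("ego", "ped1"))
# [(0, 'far_front'), (1, 'near_front'), (2, 'very_near_front')]
```

In this sketch the pair history reads as a symbolic trace ("the pedestrian moved from far in front to very near in front"), which is the kind of intelligible, per-object rationale the abstract describes for passengers, vulnerable road users, or post-hoc analysis.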