Core Concepts
Counterfactual explanations can enhance the explainability of deep learning-based traffic forecasting models by revealing how alterations in input variables affect predicted traffic speed outcomes.
Abstract
The study proposes a framework to generate counterfactual explanations for deep learning-based traffic forecasting models. The key insights are:
- Incorporating contextual features such as the number of POIs, number of lanes, and speed limits can modestly improve the performance of traffic forecasting models compared to using only historical traffic data.
- Counterfactual explanations reveal that the impact of contextual features on traffic speed prediction varies across road types (suburban, urban, highway):
  - For suburban roads, increasing the number of nearby POIs is associated with higher predicted speeds.
  - For urban roads, reducing the number of POIs is suggested to mitigate traffic congestion.
  - For highways, altering the static contextual features has little impact on predicted speeds.
- The framework allows incorporating user-defined constraints, such as directional constraints (increase/decrease specific features) and weighting constraints (prioritize certain features), to generate tailored counterfactual explanations for specific use cases.
- The scenario-driven counterfactual explanations benefit both machine learning practitioners, by clarifying model behavior, and domain experts, by yielding insights for real-world traffic management applications.
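The constrained counterfactual search described above can be sketched as a simple optimization: nudge the input features until the model's prediction reaches a target speed, while respecting directional constraints and penalizing weighted distance from the original input. The toy model, feature set, and greedy search below are illustrative assumptions, not the paper's actual architecture or algorithm.

```python
# Hypothetical sketch of constrained counterfactual search.
# toy_speed_model is a stand-in for a trained forecaster; the paper's
# model is a deep network, not this linear function.

def toy_speed_model(x):
    """Predicted speed (km/h) from [num_pois, num_lanes, speed_limit]."""
    pois, lanes, limit = x
    return 0.6 * limit + 4.0 * lanes - 0.8 * pois

def counterfactual(x0, target, directions, weights, step=0.5, iters=500, lam=0.01):
    """Greedy coordinate search toward `target` prediction.

    directions[i]: +1 = increase-only, -1 = decrease-only, 0 = frozen.
    weights[i]: higher weight penalizes changing feature i (proximity term).
    """
    x = list(x0)

    def score(v):
        pred_err = abs(target - toy_speed_model(v))
        proximity = sum(w * abs(a - b) for w, a, b in zip(weights, v, x0))
        return pred_err + lam * proximity

    for _ in range(iters):
        best, best_s = x, score(x)
        for i, d in enumerate(directions):
            if d == 0:
                continue  # frozen feature: never perturbed
            cand = list(x)
            cand[i] += d * step  # move only in the allowed direction
            s = score(cand)
            if s < best_s:
                best, best_s = cand, s
        if best is x:  # no single-feature move improves the score
            break
        x = best
    return x

# Example: urban road at [10 POIs, 2 lanes, 50 km/h limit]; seek 35 km/h
# by decreasing POIs (-1) or raising the limit (+1), lanes frozen (0);
# heavy weight on the limit so POI changes are preferred.
cf = counterfactual([10, 2, 50], 35, directions=[-1, 0, 1], weights=[1, 1, 5])
```

The directional constraint here is enforced by only generating moves in the allowed sign, and the weighting constraint enters through the weighted L1 proximity penalty; the paper's framework expresses the same two constraint types, though its optimizer is unspecified in this summary.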
Stats
The model achieved an RMSE of 9.7578, MAE of 6.4914, Accuracy of 85.12%, R2 of 0.7931, and Explained Variance of 0.7940 on the test data.
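For reference, the reported metrics can be computed as follows. This is a minimal pure-Python sketch; the sample data and the "accuracy" definition (1 minus the relative L2 error, a common convention in traffic-forecasting papers) are assumptions, since the paper's exact formula is not given in this summary.

```python
import math

def regression_metrics(y_true, y_pred):
    """RMSE, MAE, Accuracy, R2, and Explained Variance for a regression task.

    Accuracy here is assumed to be 1 - ||y - y_hat||_2 / ||y||_2.
    """
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n
    ss_res = sum(e * e for e in errors)
    rmse = math.sqrt(ss_res / n)
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot
    # Explained variance ignores any constant bias in the errors,
    # which is why it can exceed R2 (cf. 0.7940 vs 0.7931 above).
    mean_e = sum(errors) / n
    var_e = sum((e - mean_e) ** 2 for e in errors) / n
    ev = 1 - var_e / (ss_tot / n)
    acc = 1 - math.sqrt(ss_res) / math.sqrt(sum(t * t for t in y_true))
    return {"RMSE": rmse, "MAE": mae, "Accuracy": acc, "R2": r2, "EV": ev}

# Toy example: a constant +1 bias hurts R2 but not Explained Variance.
m = regression_metrics([1, 2, 3, 4], [2, 3, 4, 5])
```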
Quotes
"Counterfactual explanations reveal the minimal changes required in the original input features to alter the model's prediction, thus providing understanding without sacrificing fidelity or complexity."
"Classifying contextual data into spatial and temporal contextual features, [29] proposed a multimodal context-based graph convolutional neural network (MCGCN) to embed spatial and temporal contexts and incorporate them into traffic speed prediction for better performance."
"CFEs are straightforward to understand and can be used to provide users with a course of action to alter the prediction if they receive unfavourable decisions. These explanations establish a relationship between the input features and the decision, making them highly valuable for users to comprehend, interact with, and utilize these models."