Core Concepts
A Bayesian Neural Network-based controller that can quantify uncertainty in its predictions to enable safe and reliable autonomous vehicle lateral control, even in unfamiliar driving environments.
Abstract
The paper presents the development of a vehicle's lateral control system using a Bayesian Neural Network (BNN), a probabilistic machine learning model that can quantify uncertainty in its predictions. The key highlights are:
The BNN-based controller is trained using simulated data from the TORCS racing simulator, where the vehicle traverses a single track while being controlled by a tuned PID controller. The dataset consists of LIDAR sensor measurements and corresponding steering values.
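The paper does not give implementation details here, but the core idea of a probabilistic steering predictor can be sketched with Monte Carlo dropout, a common approximation to Bayesian inference in neural networks. The dimensions (19 LIDAR beams), the architecture, and the random weights below are illustrative assumptions, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for a trained network:
# 19 LIDAR range readings -> 32 hidden units -> 1 steering value.
W1 = rng.normal(0, 0.1, (19, 32))
W2 = rng.normal(0, 0.1, (32, 1))

def mc_dropout_predict(lidar, n_samples=100, p_drop=0.2):
    """Keep dropout active at inference and average over stochastic
    forward passes: the mean is the steering command, the standard
    deviation is a proxy for predictive uncertainty."""
    outs = []
    for _ in range(n_samples):
        h = np.maximum(lidar @ W1, 0.0)      # ReLU hidden layer
        mask = rng.random(h.shape) > p_drop  # random dropout mask
        h = h * mask / (1.0 - p_drop)        # inverted-dropout scaling
        outs.append((h @ W2).item())
    outs = np.array(outs)
    return outs.mean(), outs.std()           # (steering, uncertainty)

steering, sigma = mc_dropout_predict(rng.uniform(0.0, 1.0, 19))
```

The spread of the sampled outputs is what later feeds the confidence threshold: wide disagreement between stochastic passes signals that the input lies far from the training distribution.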
The trained BNN model demonstrates the ability to adapt and effectively control the vehicle on multiple similar tracks, showcasing its generalization capabilities.
The quantification of prediction confidence integrated into the BNN controller serves as an early-warning system, signaling when the algorithm lacks confidence in its predictions and is susceptible to failure. By establishing a confidence threshold, the system can trigger manual intervention, ensuring that control is relinquished from the algorithm when it operates outside of safe parameters.
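The gating logic described above reduces to a simple comparison; the function name and the threshold value below are hypothetical, since the paper excerpt does not state them:

```python
def select_control(bnn_steering, uncertainty, manual_steering,
                   threshold=0.15):
    """Relinquish control from the algorithm when its predictive
    uncertainty exceeds the safety threshold (early-warning trigger)."""
    if uncertainty > threshold:
        return manual_steering, "manual"      # intervention requested
    return bnn_steering, "autonomous"         # model is confident

cmd, mode = select_control(bnn_steering=0.12, uncertainty=0.30,
                           manual_steering=0.0)
```

With `uncertainty=0.30` above the assumed `threshold=0.15`, the function hands back the manual steering command and flags the mode as `"manual"`.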
When deployed on more complex tracks featuring hairpin turns and twisted sections, the uncertainty estimate enabled the system to navigate safely by triggering manual control whenever the uncertainty exceeded the threshold, even on previously unseen terrain.
The authors conclude that the BNN-based controller's ability to quantify uncertainty is a crucial capability for the secure functioning of Cyber-Physical Systems, such as autonomous vehicles, where safety is of paramount importance.
Stats
The dataset consists of LIDAR sensor measurements and corresponding steering values collected from the TORCS racing simulator.
Quotes
"The quantification of prediction confidence integrated into the controller serves as an early-warning system, signaling when the algorithm lacks confidence in its predictions and is therefore susceptible to failure."
"By establishing a confidence threshold, we can trigger manual intervention, ensuring that control is relinquished from the algorithm when it operates outside of safe parameters."