Core Concept
A modular autonomous driving architecture with sensing, localization, perception, tracking/prediction, and planning/control components that achieved the top rank in the 2023 CARLA Autonomous Driving Leaderboard 2.0 Challenge.
Summary
The paper presents the architecture of the Kyber-E2E solution that secured the top rank in the 2023 CARLA Autonomous Driving (AD) Leaderboard 2.0 Challenge. The solution employs a modular approach with five main components: sensing, localization, perception, tracking/prediction, and planning/control.
The perception module utilizes state-of-the-art language-assisted vision models for object detection and traffic sign recognition. The tracking and prediction module combines an Unscented Kalman Filter with an unbalanced linear-sum assignment to associate detections with tracks and predict object trajectories. For motion planning, the authors apply Inverse Reinforcement Learning (IRL) to the open-source inD dataset to learn the planner's cost-function parameters.
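The track-to-detection association step can be sketched as follows. This is a minimal illustration, not the authors' implementation: it pairs predicted track positions (which in the paper's pipeline would come from the Unscented Kalman Filter) with new detections via a linear-sum assignment, handling the unbalanced case (unequal numbers of tracks and detections) by padding the cost matrix. The Euclidean cost and the `gate` threshold are assumptions for the sketch.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks, detections, gate=5.0):
    """Assign detections to tracks by Euclidean distance.

    `tracks` holds predicted object positions (e.g. from a UKF's
    predict step); `detections` holds new measurements. The unbalanced
    case is handled by padding the cost matrix to a square with the
    gating cost, so surplus tracks/detections land on dummy entries.
    """
    n_t, n_d = len(tracks), len(detections)
    size = max(n_t, n_d)
    cost = np.full((size, size), gate)
    for i, t in enumerate(tracks):
        for j, d in enumerate(detections):
            cost[i, j] = np.linalg.norm(np.asarray(t, float) - np.asarray(d, float))
    rows, cols = linear_sum_assignment(cost)
    # Keep only real (non-padded) pairs whose cost passes the gate.
    matches = [(i, j) for i, j in zip(rows, cols)
               if i < n_t and j < n_d and cost[i, j] < gate]
    matched_t = {i for i, _ in matches}
    matched_d = {j for _, j in matches}
    unmatched_tracks = [i for i in range(n_t) if i not in matched_t]
    unmatched_dets = [j for j in range(n_d) if j not in matched_d]
    return matches, unmatched_tracks, unmatched_dets
```

In a full tracker, matched pairs feed the UKF update step, unmatched detections spawn new tracks, and unmatched tracks are coasted or dropped.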
The authors provide insights into their design choices and trade-offs, and analyze the impact of each component on the overall performance. The experiments demonstrate the effectiveness of the modular approach, where components trained on different datasets can still yield reasonably good performance on the challenging Leaderboard 2.0 scenarios.
The key limitations include the planner's dependence on accurate perception, especially in highly crowded scenes, and the need for long-range perception information to support lane-change maneuvers into oncoming traffic. The authors plan to address these challenges in future work by moving toward a fully end-to-end autonomous driving architecture.
Statistics
The paper reports the following key metrics:
Driving Score (DS): 3.109
Route Completion (RC): 5.285
Infraction Score (IS): 0.669
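For context, the CARLA leaderboard derives the driving score from the two metrics above: on each route, route completion is scaled by a multiplicative infraction penalty, and the global figures are averages over routes. The sketch below illustrates the per-route combination; the penalty coefficients shown approximate the published leaderboard table but should be treated as illustrative rather than the definitive values for Leaderboard 2.0.

```python
def route_driving_score(route_completion, infraction_counts, penalties):
    """Per-route driving score: route completion (in %) multiplied by
    one penalty coefficient per infraction occurrence."""
    penalty = 1.0
    for kind, count in infraction_counts.items():
        penalty *= penalties[kind] ** count
    return route_completion * penalty

# Illustrative penalty coefficients (approximate leaderboard values).
PENALTIES = {
    "collision_pedestrian": 0.50,
    "collision_vehicle": 0.60,
    "red_light": 0.70,
}

# e.g. a route 40% complete with one red-light infraction:
score = route_driving_score(40.0, {"red_light": 1}, PENALTIES)  # 40.0 * 0.7
```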
Quotes
"Our solution leverages state-of-the-art language-assisted perception models to help our planner perform more reliably in highly challenging traffic scenarios."
"We use open-source driving datasets in conjunction with Inverse Reinforcement Learning (IRL) to enhance the performance of our motion planner."