The authors propose an efficient framework that deploys large language models (LLMs) on edge devices to describe and reason about driving behavior in real time.
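A minimal sketch of what such on-device narration could look like, assuming a small causal LM (distilgpt2 here purely as an edge-sized stand-in) and a hypothetical serialization of perception output to text; this is not the paper's framework:

```python
# Sketch only: a small LM narrating a driving state on-device.
# The model choice and the state-to-text step are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")   # edge-sized stand-in model
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
model.eval()

# Hypothetical structured perception output serialized to text.
state = "ego speed 12 m/s, lead vehicle braking 20 m ahead, lane change left initiated."
prompt = f"Driving state: {state}\nDescription of the maneuver:"

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=40, do_sample=False,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```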
GRANP, a novel model combining Graph Attention Networks, LSTM, and Recurrent Attentive Neural Processes, can efficiently capture spatial-temporal relationships and quantify prediction uncertainties for vehicle trajectory forecasting.
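A simplified sketch of the ingredients named above, assuming agent-wise multi-head attention as a stand-in for the graph attention layer and a Gaussian output head for the uncertainty estimate; layer sizes and the single-LSTM design are illustrative, not the authors' architecture:

```python
# Simplified GRANP-style model (not the authors' code): spatial attention over
# agents per time step, an LSTM over time, and a Gaussian head so the forecast
# carries a per-coordinate uncertainty estimate.
import torch
import torch.nn as nn

class TinyGRANP(nn.Module):
    def __init__(self, feat_dim=4, hidden=64, horizon=12):
        super().__init__()
        self.embed = nn.Linear(feat_dim, hidden)
        # Multi-head attention over agents stands in for graph attention.
        self.agent_attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.temporal = nn.LSTM(hidden, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, horizon * 2)        # future (x, y) means
        self.log_var = nn.Linear(hidden, horizon * 2)   # per-coordinate variances

    def forward(self, x):                 # x: (batch, time, agents, feat_dim)
        b, t, a, _ = x.shape
        h = self.embed(x).view(b * t, a, -1)
        h, _ = self.agent_attn(h, h, h)   # spatial interaction per time step
        ego = h[:, 0].view(b, t, -1)      # track the ego agent through time
        _, (hn, _) = self.temporal(ego)
        return self.mu(hn[-1]), self.log_var(hn[-1])

mu, log_var = TinyGRANP()(torch.randn(2, 8, 5, 4))
print(mu.shape, log_var.exp().mean())     # predictive mean and variance
```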
LLaDA, a simple yet powerful tool, enables human drivers and autonomous vehicles to adapt their driving behavior to traffic rules in new locations by leveraging the zero-shot generalizability of large language models.
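An illustrative zero-shot prompt in the spirit of LLaDA, not the paper's exact prompt or pipeline; the model name and wording are assumptions:

```python
# Sketch: ask an LLM to adapt a planned maneuver to local traffic rules.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": ("I normally drive in the US. I am now in the UK and plan to "
                    "turn right at a signalized intersection. Which local traffic "
                    "rules change how I should execute this maneuver? Answer briefly."),
    }],
)
print(resp.choices[0].message.content)
```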
Passengers prefer a more passive lateral driving style for autonomous vehicles on rural roads, especially under adverse weather conditions and in the presence of oncoming traffic.
Action recognition models can efficiently extract spatial and temporal cues from video data to accurately classify and predict lane change events of surrounding vehicles in autonomous driving scenarios.
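One plausible instantiation of this idea, assuming a 3D CNN video backbone and a three-class labeling (left change / lane keep / right change); this is a sketch, not one of the surveyed models:

```python
# Sketch: fine-tune a 3D video backbone to classify lane-change events.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

model = r3d_18(weights=None)                    # spatio-temporal 3D ResNet
model.fc = nn.Linear(model.fc.in_features, 3)   # {left change, lane keep, right change}

clip = torch.randn(2, 3, 16, 112, 112)          # (batch, channels, frames, H, W)
logits = model(clip)
print(logits.softmax(dim=-1))                   # per-clip lane-change probabilities
```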
The authors propose a novel Sparse Query-Centric paradigm for end-to-end Autonomous Driving (SparseAD), where the sparse queries completely represent the whole driving scenario across space, time and tasks without any dense BEV representation, enabling efficient extension to more modalities and tasks.
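A conceptual sketch of one sparse query-centric step, assuming a fixed set of learned queries that cross-attend flattened multi-camera features and feed several task heads; dimensions and head outputs are illustrative, not the SparseAD implementation:

```python
# Sketch: sparse queries attend image tokens directly, so no dense BEV grid
# is ever built, and the same queries serve multiple tasks.
import torch
import torch.nn as nn

class SparseQueryHead(nn.Module):
    def __init__(self, num_queries=128, dim=256):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.det_head = nn.Linear(dim, 10)    # e.g. box parameters per query
        self.map_head = nn.Linear(dim, 20)    # e.g. polyline parameters per query

    def forward(self, img_feats):             # img_feats: (batch, tokens, dim)
        q = self.queries.unsqueeze(0).expand(img_feats.size(0), -1, -1)
        q, _ = self.cross_attn(q, img_feats, img_feats)
        return self.det_head(q), self.map_head(q)

feats = torch.randn(2, 6 * 300, 256)          # 6 cameras, 300 tokens each
boxes, map_elems = SparseQueryHead()(feats)
print(boxes.shape, map_elems.shape)
```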
AGENTSCODRIVER is a novel framework that leverages large language models to enable multiple vehicles to conduct collaborative driving with the capabilities of lifelong learning, reasoning, communication, and reflection.
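A bare-bones sketch of the reasoning-communication-reflection loop such a framework implies, not the authors' system; `llm` is a stub standing in for a real model call, and the memory and message formats are assumptions:

```python
# Sketch: each vehicle agent reasons over its observation plus peer messages,
# then reflects into a memory that accumulates across episodes.
def llm(prompt: str) -> str:
    return f"[model output for: {prompt[:40]}...]"   # stub for a real LLM call

class VehicleAgent:
    def __init__(self, name):
        self.name, self.memory = name, []            # memory enables lifelong learning

    def step(self, observation, inbox):
        prompt = (f"Vehicle {self.name} sees: {observation}. "
                  f"Messages from peers: {inbox}. "
                  f"Past reflections: {self.memory[-3:]}. Decide an action.")
        action = llm(prompt)
        self.memory.append(llm(f"Reflect on the decision: {action}"))
        message = llm(f"Summarize intent for nearby vehicles: {action}")
        return action, message

agents = [VehicleAgent("A"), VehicleAgent("B")]
inboxes = {a.name: [] for a in agents}
for a in agents:
    action, msg = a.step("merging traffic ahead", inboxes[a.name])
    for other in agents:
        if other is not a:
            inboxes[other.name].append(msg)          # communication channel
```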
UniPAD is a novel self-supervised learning paradigm that leverages 3D differentiable rendering to effectively learn continuous 3D representations, enabling seamless integration into both 2D and 3D frameworks for autonomous driving tasks.
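A toy sketch of the self-supervised signal this paradigm relies on, assuming a density field rendered to depth along rays and supervised against observed depth; everything here is heavily simplified relative to the paper:

```python
# Sketch: differentiable volume rendering of depth as a pretraining loss.
import torch
import torch.nn as nn

density_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))

def render_depth(origins, dirs, n_samples=32, near=0.5, far=20.0):
    t = torch.linspace(near, far, n_samples)                    # depths along each ray
    pts = origins[:, None, :] + dirs[:, None, :] * t[None, :, None]
    sigma = torch.relu(density_mlp(pts)).squeeze(-1)            # (rays, samples)
    delta = (far - near) / n_samples
    alpha = 1.0 - torch.exp(-sigma * delta)
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                     1.0 - alpha[:, :-1]], dim=1), dim=1)
    weights = alpha * trans                                     # volume-rendering weights
    return (weights * t[None, :]).sum(dim=1)                    # expected ray depth

rays_o, rays_d = torch.zeros(16, 3), torch.randn(16, 3)
rays_d = rays_d / rays_d.norm(dim=-1, keepdim=True)
target_depth = torch.rand(16) * 19.5 + 0.5                      # stand-in LiDAR depth
loss = ((render_depth(rays_o, rays_d) - target_depth) ** 2).mean()
loss.backward()                                                 # self-supervised gradient
```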
The authors combine basic driving imitation learning with Large Language Models (LLMs) through multi-modality prompt tokens to improve end-to-end autonomous driving performance.
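One way such prompt-token fusion could be realized, assuming learned prompt tokens prepended to projected camera features before a transformer that decodes a control action; all dimensions and the read-out choice are illustrative, not the paper's model:

```python
# Sketch: fuse language-derived prompt tokens with visual tokens for control.
import torch
import torch.nn as nn

class PromptedDrivingPolicy(nn.Module):
    def __init__(self, dim=256, n_prompt=8):
        super().__init__()
        self.prompt_tokens = nn.Parameter(torch.randn(n_prompt, dim))  # LLM-derived context
        self.vis_proj = nn.Linear(512, dim)                            # image feature adapter
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.control = nn.Linear(dim, 2)                               # steer, acceleration

    def forward(self, vis_feats):                 # vis_feats: (batch, n_vis, 512)
        b = vis_feats.size(0)
        tokens = torch.cat([self.prompt_tokens.expand(b, -1, -1),
                            self.vis_proj(vis_feats)], dim=1)
        fused = self.fusion(tokens)
        return self.control(fused[:, 0])          # read action from first prompt token

action = PromptedDrivingPolicy()(torch.randn(2, 64, 512))
print(action.shape)                               # (2, 2)
```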
POP (Partial Observations Prediction) is a novel trajectory prediction framework that employs self-supervised learning and feature distillation techniques to provide stable and accurate predictions even when only limited observations are available.
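A schematic sketch of the distillation signal this suggests, assuming a teacher that encodes the full history and a student that sees a truncated one; the encoders, horizon, and losses are illustrative, not the released code:

```python
# Sketch: feature distillation from a full-observation teacher to a
# partial-observation student, plus a trajectory regression loss.
import torch
import torch.nn as nn

teacher = nn.GRU(2, 64, batch_first=True)
student = nn.GRU(2, 64, batch_first=True)
head = nn.Linear(64, 12 * 2)                       # 12 future (x, y) steps

full_hist = torch.randn(8, 20, 2)                  # 20 observed past positions
partial = full_hist[:, -5:]                        # only the last 5 are available
future = torch.randn(8, 24)

_, h_t = teacher(full_hist)
_, h_s = student(partial)
distill = ((h_s - h_t.detach()) ** 2).mean()       # match teacher features
pred = ((head(h_s[-1]) - future) ** 2).mean()      # trajectory regression
(pred + distill).backward()
```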