Core Concepts
Autonomous vehicles rely on sophisticated hardware accelerators to power their machine vision algorithms and achieve real-time performance with reasonable power consumption.
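To make "real-time" concrete: at a typical camera rate of 30 frames per second (an illustrative figure, not one taken from the paper), the perception stack has roughly 33 ms to process each frame, which is what drives the need for dedicated accelerators:

```python
# Per-frame latency budget for a real-time perception pipeline.
# The 30 fps camera rate is an illustrative assumption, not a figure
# quoted in the paper.
fps = 30
budget_ms = 1000 / fps  # milliseconds available per frame
print(f"Per-frame budget at {fps} fps: {budget_ms:.1f} ms")
```

Every stage of the vision pipeline (capture, inference, post-processing) must fit inside that budget, or the vehicle reacts to stale information.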
Abstract
This comprehensive review paper examines the role of hardware accelerators in enhancing autonomous vehicle (AV) perception systems. It provides background on the different levels of Advanced Driver-Assistance Systems (ADAS), the general structure of ADAS, and the key perception sensors used by AV manufacturers.
The paper then examines the need for hardware accelerators to support computationally intensive machine vision algorithms in AVs, discussing the main options, including GPUs, CPUs, FPGAs, and ASICs, and their suitability for different AV applications.
Next, it surveys the machine vision algorithms used in AVs, covering object detection, lane detection, pedestrian detection, traffic sign detection, and traffic light detection. It traces the evolution from traditional image processing algorithms to deep learning-based models such as YOLO, Faster R-CNN, and SSD, which have demonstrated superior performance.
The paper also provides an in-depth analysis of the state-of-the-art processors developed by leading companies such as Tesla, NVIDIA, Qualcomm, and Mobileye, exploring their unique architectures, capabilities, and applications in AVs, along with the potential of other hardware accelerators, such as FPGAs and TPUs, to address the computational demands of AV perception systems.
The review concludes by summarizing the key findings and implications, underscoring the critical role of hardware accelerators in enabling reliable and efficient autonomous vehicle perception and decision-making.
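The deep learning detectors named above (YOLO, Faster R-CNN, SSD) all share a common post-processing step: non-maximum suppression (NMS), which collapses overlapping candidate boxes into a single detection. The sketch below is a minimal, generic illustration of that step, not code from the paper; the 0.5 overlap threshold is a conventional default, chosen here for illustration.

```python
# Minimal sketch of IoU-based non-maximum suppression (NMS), the
# post-processing step shared by detectors such as YOLO, Faster R-CNN,
# and SSD. Boxes are (x1, y1, x2, y2) tuples.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedily keep the highest-scoring boxes, suppressing heavy overlaps."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep
```

For example, two heavily overlapping detections of the same car collapse to the single higher-scoring box, while a distant box is kept. Hardware accelerators matter here because NMS runs on every frame after the (far heavier) network inference.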
Stats
"Approximately 1.3 million lives are lost each year due to road traffic accidents."
"94% of these accidents are because of human errors and distracted driving."
"Tesla's FSD chip delivers 36.86 TOPS compared to the previous NVIDIA DRIVE PX 2 AI platform's 21 TOPS."
"The NVIDIA Jetson AGX Orin offers 275 TOPS with power configurable between 15W and 60W."
"The Qualcomm Snapdragon Ride SoC can deliver over 700 TOPS at 130W for L4/L5 autonomous driving."
"Xilinx's ZYNQ FPGA achieves 14 frames per watt (fps/watt) when handling CNN tasks, surpassing the Tesla K40 GPU's 4 fps/watt."
"Google's TPU v4 model can compute more than 275 teraflops (BF16 or INT8) and outperforms Nvidia A100 GPUs, demonstrating a 1.2 to 1.7 times faster speed while consuming 1.3 to 1.9 times less power."
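The figures above can be put on a common footing as performance per watt. The sketch below uses only the two accelerators for which both TOPS and power are quoted in this section; the numbers are vendor-reported, not independently measured, and TOPS figures are not directly comparable across precisions.

```python
# Rough TOPS-per-watt comparison using only figures quoted above.
# Vendor-reported numbers; Orin uses its 60 W maximum configuration.
accelerators = {
    "NVIDIA Jetson AGX Orin (60 W)": (275, 60),
    "Qualcomm Snapdragon Ride":      (700, 130),
}

for name, (tops, watts) in accelerators.items():
    print(f"{name}: {tops / watts:.2f} TOPS/W")
```

By this crude metric the two parts land in the same ballpark (roughly 4.6 vs. 5.4 TOPS/W), despite targeting very different power envelopes.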
Quotes
"AVs have garnered significant interest recently and they hold a crucial place in transportation not just for the convenience they offer in relieving drivers but also for their capacity to revolutionize the entire transportation ecosystem."
"The integration of artificial intelligence (AI) and ML is widespread in AV development, led by companies such as Waymo, Uber, and Tesla."
"Tesla's FSD chip features two independent FSD chips, each with its dedicated storage and operating system. In case of a primary chip failure, the backup unit seamlessly takes over."
"Qualcomm's advanced processors are favoured by top AV companies like Waymo, Cruise, and Argo AI for their high performance and efficiency."