This survey provides a thorough overview of the current research landscape on physical adversarial attacks against LiDAR-based perception systems in autonomous vehicles. It begins by introducing the role of LiDAR in autonomous driving and the increasing use of deep learning-based perception models.
The survey then presents a detailed taxonomy of adversarial attacks, categorizing them along four axes: attack model (digital vs. physical), testing scenario (simulation vs. real-world), attacker knowledge (white-box, black-box, grey-box), and attack goal (object injection, object removal, object translation, and miscategorization).
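To make the taxonomy concrete, the sketch below encodes these four axes as simple Python enums. The class and field names are illustrative assumptions for this summary, not an API or notation from the survey itself.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Illustrative encoding of the survey's four taxonomy axes.
# All identifiers here are assumptions for the sketch, not from the paper.

class AttackModel(Enum):
    DIGITAL = auto()    # perturbations applied directly to the point cloud
    PHYSICAL = auto()   # perturbations realized in the physical world

class TestingScenario(Enum):
    SIMULATION = auto()
    REAL_WORLD = auto()

class AttackerKnowledge(Enum):
    WHITE_BOX = auto()  # full access to model architecture and weights
    BLACK_BOX = auto()  # query access only
    GREY_BOX = auto()   # partial knowledge (e.g., architecture but not weights)

class AttackGoal(Enum):
    OBJECT_INJECTION = auto()    # make the perception model see a fake object
    OBJECT_REMOVAL = auto()      # hide a real object
    OBJECT_TRANSLATION = auto()  # shift a detected object's position
    MISCATEGORIZATION = auto()   # change a detected object's class

@dataclass
class AttackProfile:
    """One point in the taxonomy, e.g., a physical, real-world,
    black-box spoofing attack that injects a phantom obstacle."""
    model: AttackModel
    scenario: TestingScenario
    knowledge: AttackerKnowledge
    goal: AttackGoal

# Example: the classic LiDAR spoofing setting.
spoofing = AttackProfile(AttackModel.PHYSICAL, TestingScenario.REAL_WORLD,
                         AttackerKnowledge.BLACK_BOX, AttackGoal.OBJECT_INJECTION)
```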
The survey then discusses the main attack methods, including sensor spoofing attacks, physical adversarial objects, and the use of reflective materials. It examines how attackers can exploit system-level and environmental-level knowledge to manipulate LiDAR perception, highlighting the challenges and physical constraints involved in executing these attacks.
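As a rough illustration of the spoofing idea, the sketch below injects a small cluster of fake returns into a point cloud, mimicking the effect of a spoofing laser that replays pulses toward the victim sensor. The array layout, cluster size, and spread are assumptions chosen for the sketch; real hardware is further constrained to a narrow angular window and a limited point budget per frame, which this toy version ignores.

```python
import numpy as np

def inject_spoofed_points(cloud, center, n_points=60, spread=0.2, rng=None):
    """Append a cluster of fake returns around `center` (x, y, z) to an
    (N, 4) point cloud of [x, y, z, intensity] rows.

    Toy model only: real spoofing can place a limited number of points
    within a narrow horizontal angle; here we simply add
    Gaussian-scattered points with plausible intensities.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    fake_xyz = rng.normal(loc=center, scale=spread, size=(n_points, 3))
    fake_intensity = rng.uniform(0.4, 0.9, size=(n_points, 1))
    fake = np.hstack([fake_xyz, fake_intensity]).astype(cloud.dtype)
    return np.vstack([cloud, fake])

# Example: plant a phantom "obstacle" 8 m in front of the sensor.
cloud = np.zeros((1000, 4), dtype=np.float32)   # placeholder scene
attacked = inject_spoofed_points(cloud, center=(8.0, 0.0, 0.5))
print(attacked.shape)  # (1060, 4)
```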
The evaluation metrics used to assess attack effectiveness and the robustness of perception systems are reviewed, covering measures such as Recall-IoU curves and 3D Average Precision (AP). The survey also discusses the datasets, simulators, and autonomous driving platforms used in current research.
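To ground these metrics, here is a minimal sketch of the quantity underlying both Recall-IoU curves and 3D Average Precision: the 3D intersection-over-union of a detected box and a ground-truth box. For simplicity the boxes are axis-aligned (benchmarks such as KITTI use rotated boxes), given as (cx, cy, cz, l, w, h); this simplification is an assumption of the sketch, not the survey's definition.

```python
def iou_3d_axis_aligned(box_a, box_b):
    """3D IoU of two axis-aligned boxes, each (cx, cy, cz, l, w, h).

    A detection typically counts as a true positive when its IoU with a
    ground-truth box exceeds a threshold (e.g., 0.7 for cars in KITTI);
    sweeping the detector's confidence threshold then traces the
    precision-recall curve that 3D Average Precision summarizes.
    """
    def bounds(box):
        cx, cy, cz, l, w, h = box
        return [(cx - l / 2, cx + l / 2),
                (cy - w / 2, cy + w / 2),
                (cz - h / 2, cz + h / 2)]

    inter = 1.0
    for (a_lo, a_hi), (b_lo, b_hi) in zip(bounds(box_a), bounds(box_b)):
        overlap = min(a_hi, b_hi) - max(a_lo, b_lo)
        if overlap <= 0:
            return 0.0          # boxes are disjoint along this axis
        inter *= overlap

    vol_a = box_a[3] * box_a[4] * box_a[5]
    vol_b = box_b[3] * box_b[4] * box_b[5]
    return inter / (vol_a + vol_b - inter)

# Identical boxes give IoU 1.0; disjoint boxes give 0.0.
print(iou_3d_axis_aligned((0, 0, 0, 4, 2, 1.5), (0, 0, 0, 4, 2, 1.5)))  # 1.0
```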
Lastly, the survey covers defense mechanisms designed to mitigate these attacks, including both model-agnostic and model-based approaches. It concludes by identifying open research challenges and proposing future research directions to enhance the security and reliability of LiDAR-based perception systems in autonomous vehicles.
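As a flavor of the model-agnostic side, the sketch below implements a simple physics-based plausibility check in the spirit of occlusion-consistency defenses: a genuine object blocks the laser, so a detected box with many returns landing behind its footprint is suspicious. The ray test, the horizontal-only geometry, and the threshold are illustrative assumptions, not a specific defense from the survey.

```python
import numpy as np

def passes_occlusion_check(cloud, box, max_pass_through_ratio=0.1):
    """Flag a detected box as implausible if too many LiDAR returns lie
    behind it along rays that pass through its horizontal footprint.

    cloud: (N, 3) array of [x, y, z] points in the sensor frame.
    box:   (cx, cy, cz, l, w, h) axis-aligned box.
    Vertical extent is ignored in this toy horizontal check.
    """
    cx, cy, cz, l, w, h = box
    box_range = np.hypot(cx, cy)

    # Points whose bearing falls inside the box's angular footprint.
    bearing = np.arctan2(cloud[:, 1], cloud[:, 0])
    box_bearing = np.arctan2(cy, cx)
    half_angle = np.arctan2(max(l, w) / 2, box_range)
    delta = (bearing - box_bearing + np.pi) % (2 * np.pi) - np.pi
    in_cone = np.abs(delta) < half_angle

    # A real object blocks the laser: few returns should be farther away.
    ranges = np.hypot(cloud[:, 0], cloud[:, 1])
    behind = in_cone & (ranges > box_range + max(l, w) / 2)
    n_cone = max(int(in_cone.sum()), 1)
    return behind.sum() / n_cone <= max_pass_through_ratio

# A box with no returns behind it passes the check.
scene = np.array([[8.0, 0.0, 0.5]])
print(passes_occlusion_check(scene, (8.0, 0.0, 0.5, 1.0, 1.0, 1.5)))  # True
```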
Key insights distilled from source content by Amira Guesmi et al., arxiv.org, 10-01-2024: https://arxiv.org/pdf/2409.20426.pdf