The paper focuses on the security of behavioral-based driver authentication systems in vehicles. It first proposes a realistic system model and threat model that reflect the practical implementation and deployment of such systems in real-world vehicles.
The authors then develop two new lightweight behavioral-based driver authentication and identification systems using Machine Learning (ML) and Deep Learning (DL) architectures. These systems are designed to be efficient and compatible with the constraints of commercial in-vehicle networks such as the Controller Area Network (CAN) bus.
To assess the security of these systems, the authors introduce GAN-CAN, a novel class of evasion attacks that can fool state-of-the-art models with a perfect attack success rate. GAN-CAN attacks leverage Generative Adversarial Networks (GANs) to generate realistic data that can bypass the authentication systems. The attacks are evaluated under different assumptions on the attacker's knowledge, from white-box to black-box scenarios.
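To make the evasion idea concrete, here is a minimal, self-contained sketch of the attack pattern, not the paper's GAN-CAN implementation: a toy distance-based authenticator is enrolled on a legitimate driver's behavioral feature vectors, and a black-box attacker who has eavesdropped a handful of samples fits a simple generative model (a Gaussian here standing in for the paper's GAN) whose forged frames land inside the acceptance region. All feature dimensions, thresholds, and sample counts are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical driving-behavior features (e.g., speed, brake pressure, ...)
D = 4
driver_mean = rng.normal(0.0, 1.0, D)
driver_cov = np.diag(rng.uniform(0.05, 0.2, D))

# Enrollment: the authenticator profiles the legitimate driver.
enroll = rng.multivariate_normal(driver_mean, driver_cov, size=500)
mu = enroll.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(enroll, rowvar=False))

def authenticate(x, threshold=9.49):  # ~95% chi-square(4) quantile, illustrative
    """Accept a feature vector if its Mahalanobis distance to the profile is small."""
    d = x - mu
    return float(d @ cov_inv @ d) <= threshold

# Black-box attacker: eavesdrops a few CAN-derived feature vectors and fits
# a generative model of the driver's behavior (a GAN in the actual paper).
eavesdropped = rng.multivariate_normal(driver_mean, driver_cov, size=50)
g_mu = eavesdropped.mean(axis=0)
g_cov = np.cov(eavesdropped, rowvar=False)

# Forged frames sampled from the fitted model mostly pass authentication.
forged = rng.multivariate_normal(g_mu, g_cov, size=200)
success = float(np.mean([authenticate(x) for x in forged]))
print(f"attack success rate: {success:.0%}")
```

Because the forged samples are drawn from a model of the legitimate driver's own feature distribution, most of them fall inside the authenticator's acceptance region; the paper's GAN-based generator pushes this to a perfect success rate against far stronger ML/DL authenticators.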
The evaluation shows that the proposed GAN-CAN attacks can steal a vehicle in less than 22 minutes, regardless of the underlying authentication system. The authors also provide a comprehensive comparison of their systems and attacks with the state-of-the-art, highlighting the significant security vulnerabilities in existing behavioral-based driver authentication approaches.
Finally, the paper concludes by providing a list of security requirements and implementation suggestions to aid practitioners in the safe and secure deployment of behavioral-based driver authentication systems in real-world vehicles.
Key insights distilled from https://arxiv.org/pdf/2306.05923.pdf by Emad Efatina..., 04-19-2024