The paper explores the practicality of adversarial evasion attacks against machine learning-based network intrusion detection systems (ML-NIDS). It makes three key contributions:
Identifying numerous practicality issues for adversarial evasion attacks on ML-NIDS using an attack tree threat model. The attack tree highlights leaf nodes of questionable feasibility, indicating the significant challenges attackers face in executing these attacks in real-world scenarios.
Introducing a taxonomy of the practicality issues associated with adversarial attacks against ML-based NIDS, including challenges related to attackers' knowledge, the attack space, and the dynamic nature of ML models.
Investigating the impact of continuous retraining on the effectiveness of adversarial attacks against NIDS. The experiments show that continuous retraining, even without adversarial training, can significantly reduce the impact of the FGSM, PGD, and BIM attacks on the accuracy, precision, recall, and F1-score of ANN-, SVM-, and CNN-based NIDS models (a sketch of the gradient-based perturbation these attacks share follows below).
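To make the attack setting concrete, here is a minimal sketch of the FGSM perturbation, the simplest of the three gradient-based attacks named above, written in PyTorch. The model, loss function, and epsilon value are illustrative assumptions, not the paper's exact configuration.

```python
import torch

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.1):
    """Craft an FGSM adversarial example: x_adv = x + eps * sign(grad_x L).

    PGD and BIM iterate this same signed-gradient step with a smaller
    step size, projecting back into an epsilon-ball around x.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # One signed-gradient step in the direction that increases the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```

Note that in the NIDS setting such feature-space perturbations must still correspond to valid, transmittable network traffic, which is one of the practicality issues the taxonomy highlights.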
The results suggest that the dynamic nature of ML models introduces an additional hurdle for attackers, who would constantly need to obtain the model's updated gradients, a complex task, especially in the NIDS domain. The models' performance metrics recovered after just one or two retraining sessions, demonstrating the effectiveness of continuous training in mitigating the impact of adversarial attacks; the sketch below illustrates why stale gradients lose their potency.
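The intuition is that adversarial examples are crafted against a snapshot of the model, and once the defender retrains on fresh traffic, the attacker's gradients are stale. The following is a minimal sketch of one such retraining round; the training loop, data source, and `attack_fn` wrapper are illustrative assumptions, not the paper's experimental protocol.

```python
import copy
import torch

def retraining_cycle(model, loss_fn, optimizer, new_batches, attack_fn, eval_set):
    """One continuous-retraining round: snapshot, retrain, re-evaluate.

    `attack_fn(model, x, y)` wraps any gradient-based attack (e.g. the
    FGSM sketch above); `new_batches` is freshly collected traffic with
    no adversarial examples added (i.e., no adversarial training).
    """
    # The attacker can only query the snapshot that existed before retraining.
    snapshot = copy.deepcopy(model)

    # Defender retrains on newly collected traffic.
    model.train()
    for x, y in new_batches:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()

    # Adversarial examples built from the stale snapshot are evaluated
    # against the updated model; their effectiveness typically degrades.
    model.eval()
    x_eval, y_eval = eval_set
    x_adv = attack_fn(snapshot, x_eval, y_eval)
    with torch.no_grad():
        acc = (model(x_adv).argmax(dim=1) == y_eval).float().mean().item()
    return acc
```

Calling this once per retraining session and tracking the returned accuracy mirrors, at a high level, how recovery after one or two sessions would manifest.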
Key insights distilled from: Mohamed el S... on arxiv.org, 04-05-2024
https://arxiv.org/pdf/2306.05494.pdf