Lunga, L., & Sreehari, S. (2024). Undermining Image and Text Classification Algorithms Using Adversarial Attacks. Electronic Imaging Conference 2025. arXiv:2411.03348v1 [cs.CR].
This research paper investigates the vulnerability of machine learning models, specifically text and image classifiers, to adversarial attacks built using Generative Adversarial Networks (GANs), the Synthetic Minority Oversampling Technique (SMOTE), the Fast Gradient Sign Method (FGSM), and Gradient-weighted Class Activation Mapping (GradCAM).
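To illustrate the SMOTE side of this pipeline, the sketch below (a minimal illustration with hypothetical, randomly generated data, not the authors' code) uses imbalanced-learn to synthesize minority-class samples for an imbalanced tabular fraud-like dataset and checks how a baseline classifier behaves on the purely synthetic inputs.

import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))            # hypothetical transaction features
y = (rng.random(1000) < 0.05).astype(int)  # ~5% "fraud" minority class

# Baseline classifier trained on the original, imbalanced data.
clf = RandomForestClassifier(random_state=0).fit(X, y)

# SMOTE interpolates between minority-class neighbours to create synthetic samples;
# fit_resample returns the original samples followed by the newly generated ones.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
synthetic = X_res[len(X):]

# Score the baseline model on the synthetic minority samples to see how it behaves
# on inputs lying off the original data distribution.
print("Accuracy on synthetic minority samples:",
      clf.score(synthetic, np.ones(len(synthetic), dtype=int)))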
The researchers trained three machine learning models (Decision Tree, Random Forest, and XGBoost) on a financial fraud dataset and a Convolutional Neural Network (CNN) on the Olivetti Faces Dataset. They then generated adversarial examples using GANs and SMOTE for the text classifiers and FGSM with GradCAM for the facial recognition model. The performance of the models was evaluated before and after the attacks by comparing accuracy, AUC, recall, and precision.
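The FGSM step could look roughly like the following PyTorch sketch (a generic illustration of the technique, not the paper's exact implementation; the GradCAM-guided localization of the perturbation is omitted). The input is nudged in the direction of the sign of the loss gradient by a small epsilon, which is typically enough to flip the model's prediction while keeping the change visually subtle.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.05):
    """Return an adversarially perturbed copy of `image` (shape [1, C, H, W])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction of the gradient's sign, then clamp to a valid pixel range.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()

# Hypothetical usage with a trained CNN and an Olivetti face tensor:
# adv = fgsm_attack(cnn, face_tensor, torch.tensor([subject_id]))
# print(cnn(adv).argmax(dim=1))  # compare against the prediction on the clean image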
The adversarial attacks significantly impacted the performance of all tested models. The text classification models experienced a 20% decrease in accuracy, while the facial recognition model's accuracy dropped by 30%. This highlights the vulnerability of both text and image classifiers to adversarial manipulation.
The study concludes that machine learning models, even those with high initial accuracy, are susceptible to adversarial attacks, raising concerns about their reliability in real-world applications like fraud detection and biometric security. The authors emphasize the urgent need for robust defense mechanisms to counter these vulnerabilities.
This research contributes to the growing body of knowledge on adversarial machine learning, demonstrating the effectiveness of various attack techniques and emphasizing the need for improved security measures in machine learning systems.
The study focuses on specific attack and defense techniques, and further research is needed to explore other methods and their effectiveness. Additionally, investigating the transferability of adversarial examples across different models and datasets is crucial for developing robust defenses.