Evaluating the Vulnerability of Image Classification Models to Adversarial Attacks: A Comparative Analysis of FGSM and Carlini-Wagner Attacks and the Effectiveness of Defensive Distillation
Deep neural networks used for image classification are vulnerable to adversarial attacks, in which subtle, often imperceptible perturbations of the input cause misclassification. This study investigates the impact of the Fast Gradient Sign Method (FGSM) and Carlini-Wagner (C&W) attacks on three pre-trained CNN models and examines the effectiveness of defensive distillation as a countermeasure.
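To make the attack setting concrete, the sketch below shows the standard FGSM formulation, x_adv = x + epsilon * sign(grad_x J(theta, x, y)), in PyTorch. This is a minimal illustration, not the authors' exact implementation; the function name, the epsilon value, and the assumption that inputs lie in the [0, 1] range are illustrative choices.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """Minimal FGSM sketch: x_adv = x + epsilon * sign(grad_x loss).

    `epsilon` and the [0, 1] pixel range are assumptions for illustration.
    """
    images = images.clone().detach().requires_grad_(True)
    # Compute the classification loss with respect to the true labels
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step each pixel in the direction of the sign of the loss gradient
    adv_images = images + epsilon * images.grad.sign()
    # Clip back to the valid input range and detach from the graph
    return adv_images.clamp(0, 1).detach()
```

A single gradient step of this form is what makes FGSM fast; the Carlini-Wagner attack instead solves an iterative optimization problem and typically finds smaller, harder-to-detect perturbations.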