The authors show that hybrid quantum classifiers based on quantum kernel methods and support vector machines (SVMs) are vulnerable to adversarial attacks: small, engineered perturbations of the input data can cause the classifier to misclassify.
They first provide a mathematical introduction to adversarial machine learning and quantum-enhanced SVMs. The authors then develop a technique to generate adversarial samples that deceive the QSVM classifier by shifting the feature vectors to the opposite side of the decision hyperplane.
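The attack idea can be sketched numerically. The snippet below is a minimal illustration, not the authors' implementation: a classical RBF kernel stands in for the quantum kernel, the support vectors, dual coefficients `alpha_y`, and step size `eta` are toy values chosen for the example, and the input is nudged along the (analytic) gradient of the decision function until its predicted label flips.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Classical stand-in for the quantum kernel K(x, y).
    return np.exp(-gamma * np.sum((x - y) ** 2))

def decision(x, sv, alpha_y, b, gamma=1.0):
    # Kernel SVM decision function: f(x) = sum_i alpha_i y_i K(sv_i, x) + b
    return sum(ay * rbf_kernel(s, x, gamma) for s, ay in zip(sv, alpha_y)) + b

def grad_decision(x, sv, alpha_y, gamma=1.0):
    # Analytic gradient of the RBF decision function w.r.t. the input x.
    g = np.zeros_like(x)
    for s, ay in zip(sv, alpha_y):
        g += ay * rbf_kernel(s, x, gamma) * (-2.0 * gamma) * (x - s)
    return g

def adversarial_shift(x, sv, alpha_y, b, eta=0.05, steps=200, gamma=1.0):
    # Push x across the decision hyperplane: take small normalized gradient
    # steps that drive f(x) toward the opposite sign, then stop.
    x_adv = x.copy()
    target = -np.sign(decision(x, sv, alpha_y, b, gamma))
    for _ in range(steps):
        if np.sign(decision(x_adv, sv, alpha_y, b, gamma)) == target:
            break
        g = grad_decision(x_adv, sv, alpha_y, gamma)
        x_adv += eta * target * g / (np.linalg.norm(g) + 1e-12)
    return x_adv

# Toy model: one support vector per class, on opposite sides of x1 = 1.
sv = [np.array([0.0, 0.0]), np.array([2.0, 0.0])]
alpha_y = [1.0, -1.0]            # alpha_i * y_i
b = 0.0
x = np.array([0.3, 0.1])         # starts on the positive side
x_adv = adversarial_shift(x, sv, alpha_y, b)
```

The perturbation is small in feature space because the walk stops as soon as the sign of `decision` flips; in the paper the same idea operates on the quantum-embedded feature vectors.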
The authors demonstrate the vulnerability of QSVM classifiers through numerical simulations on a medical image dataset. They show that the adversarial samples can be generated efficiently, especially when the quantum embedding circuit leads to a concentration of kernel values.
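The role of kernel concentration can be illustrated with a classical toy model (an assumption for exposition, not the paper's quantum embedding): random unit vectors stand in for quantum-embedded states, and a fidelity-style kernel |⟨u,v⟩|² is computed between all pairs. As the feature dimension grows, the off-diagonal kernel values cluster ever more tightly around one value, which flattens the decision function and makes small adversarial shifts cheap to find.

```python
import numpy as np

rng = np.random.default_rng(1)

def kernel_spread(dim, n=200):
    # Sample n random unit "states" in a dim-dimensional feature space.
    V = rng.normal(size=(n, dim))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    # Fidelity-style kernel between all pairs, |<u, v>|^2.
    K = np.abs(V @ V.T) ** 2
    # Spread (std) of the off-diagonal kernel values.
    off = K[np.triu_indices(n, k=1)]
    return off.std()

# Spread shrinks as the feature space grows: kernel values concentrate.
spreads = [kernel_spread(d) for d in (4, 64, 1024)]
```

For unit vectors in dimension d the spread decays roughly like 1/d, mirroring the exponential concentration that deep quantum embedding circuits can induce.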
To mitigate the effect of adversarial attacks, the authors propose a simple defense strategy based on data augmentation with a few crafted adversarial samples. They show that this adversarial training approach can significantly improve the robustness of the QSVM classifier against new attacks.
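The defense can be sketched as follows. This is a hedged stand-in, not the authors' pipeline: a kernel ridge classifier (plain numpy least squares) replaces the QSVM, the "crafted" adversarial points are simply clean samples shifted toward the other class, and they are added to the training set with their original labels before refitting.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def fit(X, y, lam=1e-3, gamma=1.0):
    # Kernel ridge fit: solve (K + lam*I) alpha = y.
    K = np.array([[rbf(xi, xj, gamma) for xj in X] for xi in X])
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, alpha, x, gamma=1.0):
    k = np.array([rbf(xi, x, gamma) for xi in X_train])
    return np.sign(k @ alpha)

# Two Gaussian blobs as a toy dataset.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.3, (20, 2)), rng.normal(1, 0.3, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)

# Craft a few "adversarial" points: class -1 samples pushed across the
# midline, kept with their ORIGINAL label so the retrained model learns
# to resist the shift.
X_adv = X[:5] + np.array([1.5, 1.5])
y_adv = y[:5]

alpha_plain = fit(X, y)
X_aug = np.vstack([X, X_adv])
alpha_robust = fit(X_aug, np.hstack([y, y_adv]))
```

The undefended model labels the shifted points by their new position, while the augmented model recovers their original class, which is the qualitative effect the authors report for adversarial training of the QSVM.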
Finally, the authors present a proof-of-principle experiment on real quantum hardware, suggesting that adversarial training can also make quantum kernel methods more robust against hardware noise.
Key insights distilled from content by Giuseppe Mon... on arxiv.org, 04-10-2024
https://arxiv.org/pdf/2404.05824.pdf