Core Concepts
The integration of artificial intelligence (AI) and machine learning (ML) in healthcare introduces significant security and privacy risks, exposing sensitive medical data and system integrity to potential cyberattacks.
Abstract
The paper explores the security and privacy threats posed by AI/ML applications in healthcare. Through a systematic examination of existing research across a range of medical domains, the authors identify significant gaps in the understanding of adversarial attacks targeting medical AI systems.
The paper begins by providing a taxonomy that analyzes how the identities, knowledge, capabilities, and goals of adversaries in the healthcare domain may differ from those considered in traditional AI attack threat models. It then conducts a comprehensive systematization of the current state of research on security and privacy attacks on medical AI, categorizing these works according to threat models specific to medical settings.
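To make the taxonomy's dimensions concrete, here is a minimal illustrative sketch, not taken from the paper, that encodes an adversary along the four axes the authors analyze; the field names and example values are assumptions chosen for illustration.

```python
# Illustrative only: a minimal encoding of the threat-model dimensions the
# paper's taxonomy analyzes (identity, knowledge, capability, goal). Field
# names and example values are assumptions, not the paper's exact schema.
from dataclasses import dataclass


@dataclass
class MedicalAdversary:
    identity: str    # e.g. "patient", "medical practitioner", "service provider", "business competitor"
    knowledge: str   # e.g. "black-box" (query access only) vs. "white-box" (model internals)
    capability: str  # e.g. "perturb own inputs", "poison training records", "query the deployed model"
    goal: str        # e.g. "integrity" (induce misdiagnosis), "confidentiality" (leak patient data)


# A patient who can only alter their own submitted data and query the model,
# aiming to flip a diagnosis-relevant prediction.
example = MedicalAdversary(
    identity="patient",
    knowledge="black-box",
    capability="perturb own inputs",
    goal="integrity",
)
print(example)
```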
The authors point out possible directions and challenges for future research in medical domains where AI is increasingly deployed and has shown promising results. As proof of concept, they carried out five adversarial attacks in diverse, under-explored medical domains to validate their insights.
The key findings include:
Adversaries in the healthcare domain can have diverse identities, including patients, medical practitioners, service providers, and business competitors, each with different motivations and capabilities.
Existing research on medical AI systems has explored various integrity attacks, such as evasion, poisoning, and backdoor attacks, as well as confidentiality attacks such as membership inference and model inversion.
However, several attack vectors remain under-explored, including availability attacks, fairness attacks, and explainability attacks, and warrant further investigation.
The authors conducted case studies on membership inference, backdoor, and poisoning attacks in ECG diagnostics, disease risk prediction, medical image segmentation, and EHR diagnostics, demonstrating the feasibility and effectiveness of these attacks in diverse medical domains (a minimal membership-inference sketch follows this list).
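The membership-inference idea referenced above can be illustrated with a small, self-contained sketch. The code below is not the paper's implementation: it uses synthetic tabular data and a simple confidence-thresholding attack (the dataset, model, and threshold are all illustrative assumptions) to show the basic intuition that an overfit model tends to be more confident on its training records than on unseen ones.

```python
# Hypothetical sketch of confidence-thresholding membership inference.
# Everything here is illustrative; the paper's case studies target medical
# data (ECG, EHR, etc.), which is not reproduced in this example.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in tabular data; in the medical setting these would be patient records.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Train a small model only on the "member" split so it can overfit to it.
target_model = MLPClassifier(hidden_layer_sizes=(128,), max_iter=1000, random_state=0)
target_model.fit(X_member, y_member)


def max_confidence(model, X):
    """Highest predicted class probability the model assigns to each example."""
    return model.predict_proba(X).max(axis=1)


# The attacker guesses "member" when the model is unusually confident.
threshold = 0.9  # illustrative; a real attack would tune this, e.g. via shadow models
member_scores = max_confidence(target_model, X_member)
nonmember_scores = max_confidence(target_model, X_nonmember)

tpr = (member_scores > threshold).mean()     # training records correctly flagged
fpr = (nonmember_scores > threshold).mean()  # unseen records wrongly flagged
print(f"membership inference: TPR={tpr:.2f}, FPR={fpr:.2f}")
```

A gap between the two rates indicates that model confidence leaks membership information, which in a clinical context can reveal whether a specific patient's record was part of the training data.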
Stats
The paper does not provide specific numerical data or metrics to support the key insights; however, it includes several tables that summarize existing attacks across different medical domains.
Quotes
"The integration of AI/ML technologies into medical systems inevitably introduces vulnerabilities."
"Recognizing the lack of a holistic view of AI attack research in the medical landscape, we aim to fill this gap by systematically examining the medical application domains and laying the groundwork for future attack research."
"Through our analysis of different threat models and feasibility studies on adversarial attacks in different medical domains, we provide compelling insights into the pressing need for cybersecurity research in the rapidly evolving field of AI healthcare technology."