Basic concepts
Bridging the gap between academic and practical AI security by studying real-world threat models.
Summary
The content discusses the disparity between academic threat models in AI security and how AI is used in practice. By comparing the six most studied attacks in AI security research against survey data from industry, the article highlights mismatches between research and practice and argues for studying more practical threat models. It also offers insights into common dataset sizes, challenges in AI security research, and factors that determine access to AI systems.
Structure:
- Introduction
- Identifies the gap between academic research and practical AI security.
- Background
- Defines AI and different paradigms like ML, RL, and DM.
- Methodology
- Describes the questionnaire design, pretests, and recruiting process.
- Results
- Analyzes training-time attacks (poisoning, backdoors), test-time attacks (evasion, model stealing), and privacy attacks (membership inference, attribute inference); see the evasion sketch after this outline.
- AI Security Beyond Specific Attacks
- Discusses common dataset sizes, challenges in AI security research, and factors influencing access to AI systems.
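To make the attack categories above concrete, here is a minimal sketch of one of them: a test-time evasion attack using the fast gradient sign method (FGSM). The tiny model, the random data, and the epsilon value are illustrative stand-ins, not taken from the paper:

```python
# Minimal sketch of a test-time evasion attack (FGSM), one of the six
# attack classes the survey covers. Model and data are stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical white-box victim: a tiny classifier on 20-dim inputs.
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 20, requires_grad=True)  # benign input
y = torch.tensor([0])                       # its true label
epsilon = 0.1                               # L-infinity perturbation budget

# FGSM: one gradient step on the input, in the direction that raises the loss.
loss = loss_fn(model(x), y)
loss.backward()
x_adv = (x + epsilon * x.grad.sign()).detach()

print("clean prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

Note that FGSM needs white-box gradient access to the victim, exactly the kind of assumption the survey finds is rarely granted in practice.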
Statistics
Recent works have identified a gap between research and practice in artificial intelligence security.
A survey of 271 industrial practitioners was conducted to match threat models to AI usage in practice.
71.6% of participants reported that training data was not accessible.
48.1% of participants stated that they use third-party models and then fine-tune them.
44.5% of participants reported allowing public access to model outputs.
35.3% of participants reported allowing queries to the model.
4.8% of participants granted third parties access to their model weights.
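To make the access spectrum behind these numbers concrete, here is a purely illustrative sketch contrasting query access (outputs only) with weight access; the BlackBoxAPI class and all names are hypothetical, not from the paper:

```python
# Illustrative sketch of two access levels from the survey statistics.
import torch
import torch.nn as nn

class BlackBoxAPI:
    """Query access (the 35.3% case): callers get outputs, never weights."""
    def __init__(self, model: nn.Module):
        self._model = model  # kept internal; weights are not exposed

    def query(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            return self._model(x).softmax(dim=1)  # output access only

proprietary = nn.Linear(20, 2)  # stand-in for a deployed model
api = BlackBoxAPI(proprietary)

print(api.query(torch.randn(1, 20)))  # allowed: query and observe outputs
# The 4.8% case (weight access) would mean handing out proprietary.state_dict(),
# which this interface deliberately does not do.
```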
Quotes
"Our paper is thus a call for action to study more practical threat models in artificial intelligence security."
"The underlying problem, an absence of knowledge on how artificial intelligence (AI) is used in practice, is however still unaddressed."
"Most academic attacks require access to more data, limiting their usefulness."
"Our work is a call for transferability studies, when neither model nor data are known."