Core Concepts
The proposed Diffusion Attack generates adversarial inputs in natural styles, successfully maintaining attack performance while preserving the most natural and deceptive appearance.
Summary
Abstract:
- Adversarial attacks in Virtual Reality pose security threats.
- Proposed framework incorporates style transfer for natural adversarial inputs.
- Focus on the naturalness and visual comfort of the attack images' appearance.
Introduction:
- Attacks on deep neural networks can compromise the security of the applications that rely on them.
- Neural style transfer used to transform content images into different styles.
- Novel technique proposed to generate effective yet natural-looking adversarial examples.
Diffusion Attack:
- Stable Diffusion model creates detailed images based on textual prompts.
- Components include an encoder network, a U-Net, and a decoder network (see the pipeline sketch after this list).
- Neural style transfer applied to impose a naturalistic style on the content image.
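The summary names the pipeline stages without implementation detail. As a minimal, illustrative sketch (not the paper's code), the same stages can be driven through the Hugging Face diffusers library; the checkpoint ID and prompt here are assumptions:

```python
# Minimal sketch: text-to-image generation with Stable Diffusion via the
# Hugging Face `diffusers` library. The checkpoint and prompt are
# assumptions, not taken from the paper.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# The components listed above map onto pipeline attributes:
#   encoder network -> pipe.text_encoder (CLIP text encoder for the prompt)
#   U-Net           -> pipe.unet (iterative denoiser in latent space)
#   decoder network -> pipe.vae (decodes denoised latents back to pixels)
style_image = pipe("an impressionist oil painting of a city street").images[0]
style_image.save("style.png")
```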
Experiment and Preliminary Results:
- Two-stage training process using the authors' own style image data (a training skeleton follows this list).
- Generated images evaluated qualitatively and quantitatively against baselines.
- Achieved higher aesthetic and quality scores compared to other methods.
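The two-stage process is not spelled out here; the following is a self-contained PyTorch skeleton of one plausible reading, where stage one fits a style transfer network and stage two adds an adversarial term. The tiny networks, the Gram-matrix style loss computed on raw pixels, and the loss weights are all illustrative stand-ins, not the paper's architecture:

```python
# Self-contained PyTorch skeleton of a hypothetical two-stage run.
# Networks, data, and loss weights are illustrative stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

style_net = nn.Sequential(               # stand-in style transfer network
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
target_model = nn.Sequential(            # stand-in target classifier
    nn.Flatten(), nn.Linear(3 * 32 * 32, 10),
)
opt = torch.optim.Adam(style_net.parameters(), lr=1e-4)

def gram(x):
    # Gram matrix over channels: the standard second-order style statistic
    # (computed on raw pixels here purely to keep the sketch small).
    b, c, h, w = x.shape
    f = x.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

content = torch.rand(4, 3, 32, 32)       # dummy content batch
style = torch.rand(4, 3, 32, 32)         # dummy style batch
labels = torch.randint(0, 10, (4,))      # dummy ground-truth labels

# Stage 1: fit the style transfer network alone (style + content terms).
for _ in range(100):
    out = style_net(content)
    loss = F.mse_loss(gram(out), gram(style)) + F.mse_loss(out, content)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: joint objective; the subtracted cross-entropy rewards outputs
# that the target classifier gets wrong (an untargeted adversarial term).
for _ in range(100):
    out = style_net(content)
    loss = (F.mse_loss(gram(out), gram(style))
            + F.mse_loss(out, content)
            - 0.1 * F.cross_entropy(target_model(out), labels))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The negative cross-entropy term in stage two pushes the stylized output away from the classifier's correct prediction, which is the usual way an untargeted adversarial objective enters a joint loss.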
Conclusion:
- Diffusion Attack successfully generates naturalistic adversarial images with competitive attack performance (a success-rate check is sketched after this list).
- Joint training of style transfer network and adversarial attack network utilized.
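As a closing illustration of how "competitive performance" is typically quantified for an untargeted attack, here is a hypothetical success-rate check; the model, data, and perturbation are stand-ins, not the paper's evaluation code:

```python
# Hypothetical success check for an untargeted attack: it counts an image
# as a win when the target model's predicted label flips after the
# adversarial transformation. Model, data, and perturbation are stand-ins.
import torch
import torch.nn as nn

target_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()

clean = torch.rand(8, 3, 32, 32)                      # dummy clean batch
adversarial = clean + 0.05 * torch.randn_like(clean)  # stand-in for stylized output

with torch.no_grad():
    flipped = (target_model(clean).argmax(dim=1)
               != target_model(adversarial).argmax(dim=1))
print(f"untargeted attack success rate: {flipped.float().mean().item():.2%}")
```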