Key Concepts
DNNShield introduces protection layers to secure DNNs without relying on secrecy, ensuring ownership verification and model integrity.
Abstract
The surge in popularity of machine learning has made Deep Neural Networks (DNNs) vulnerable to theft and unauthorized use. DNNShield offers a novel approach that embeds unique identifiers within the model architecture using specialized protection layers. These layers secure training and deployment while providing high resilience against attacks. The method eliminates the need for secret keys, allowing repeated public ownership claims with minimal impact on performance. Extensive evaluations across datasets and model architectures validate the effectiveness and efficiency of DNNShield.
I. INTRODUCTION
ML advancements drive significant progress in various fields.
Larger DNN models require extensive resources for training.
Existing IP protection methods include watermarking and passporting.
II. BACKGROUND
Matrices represent data in ML models.
Understanding fully connected (FC) and convolutional layers is crucial.
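As a refresher on the background above, an FC layer is just an affine map y = Wx + b. The sketch below is illustrative only (toy weights, pure-Python lists rather than any framework's tensors):

```python
def fc_layer(W, b, x):
    """Apply a fully connected layer: y[i] = sum_j W[i][j] * x[j] + b[i]."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) + b_i
            for row, b_i in zip(W, b)]

W = [[1.0, 2.0], [3.0, 4.0]]  # 2x2 weight matrix (toy values)
b = [0.5, -0.5]               # bias vector
x = [1.0, 1.0]                # input vector
print(fc_layer(W, b, x))      # [3.5, 6.5]
```

A convolutional layer follows the same matrix view, but with a structured, weight-sharing W.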
Cosine similarity measures the similarity in direction between two vectors.
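Cosine similarity is the standard formula cos(u, v) = ⟨u, v⟩ / (‖u‖·‖v‖); a minimal implementation:

```python
import math

def cosine_similarity(u, v):
    """cos(u, v) = <u, v> / (||u|| * ||v||); 1 = same direction, 0 = orthogonal."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine_similarity([1, 0], [1, 0]))  # 1.0 (identical direction)
print(cosine_similarity([1, 0], [0, 1]))  # 0.0 (orthogonal)
```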
III. REQUIREMENT ANALYSIS
A novel ownership verification method is needed for ML models.
The scenario involves a data owner, a proprietary model, and misuse by a third party.
The threat model assumes white-box access and adaptive adversaries.
IV. DNNSHIELD DESIGN
DNNShield integrates publicly known protection layers into the model architecture.
Hadamard and Permutation layers alter the data flow to enable ownership verification.
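To illustrate the idea of invertible layers that alter data flow, here is a hedged sketch, not the paper's exact construction: a permutation layer that reorders activations (with its inverse), and a normalized Walsh-Hadamard transform, which is orthogonal and so self-inverting. The owner-chosen permutation shown is a toy value:

```python
import math

def permute(x, perm):
    """Permutation layer: output[i] = x[perm[i]]."""
    return [x[i] for i in perm]

def inverse_permute(y, perm):
    """Undo the permutation so downstream layers see the original order."""
    out = [0.0] * len(y)
    for i, p in enumerate(perm):
        out[p] = y[i]
    return out

def hadamard_transform(x):
    """Normalized fast Walsh-Hadamard transform (len(x) a power of 2).
    The transform is orthogonal: applying it twice recovers the input."""
    n = len(x)
    h = list(x)
    step = 1
    while step < n:
        for i in range(0, n, step * 2):
            for j in range(i, i + step):
                a, b = h[j], h[j + step]
                h[j], h[j + step] = a + b, a - b
        step *= 2
    return [v / math.sqrt(n) for v in h]

x = [0.1, 0.2, 0.3, 0.4]
perm = [2, 0, 3, 1]                       # toy permutation
assert inverse_permute(permute(x, perm), perm) == x
```

Because both transforms are invertible, inserting them leaves the model's function recoverable while making the transformed architecture identifiable.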
V. SECURITY ANALYSIS
Attacks such as layer merging and convolutional pattern modification are evaluated.
The feasibility of splitting protection layers is analyzed.
VI. EVALUATION
The functionality of DNNShield is demonstrated with Hadamard and Permutation layers.
The position and number of protection layers are tested for their impact on performance.
Post-protection model refinement is showcased by fine-tuning on new datasets.
Statistics
This paper provides a method that ensures model ownership verification and model integrity by incorporating protection layers into the model architecture.