DNNShield addresses the vulnerability of DNNs to theft and unauthorized use by embedding unique identifiers within the model architecture. Its protection layers secure the model through training and deployment while remaining highly resilient to attacks. Because ownership verification does not depend on keeping a key secret, developers can assert ownership repeatedly with minimal impact on model performance.
The surge in popularity of machine learning has led to significant investments in training Deep Neural Networks (DNNs). The resulting models are valuable assets, and their owners need intellectual property protection. Existing methods such as watermarking and passporting have limitations that DNNShield aims to overcome.
Watermarking embeds markers in the model during training, but ownership verification then hinges on keeping the watermark key secret: once the key is revealed in a dispute, the protection is weakened. Passporting appends extra layers that entangle the model's parameters, at the cost of additional learning overhead and vulnerability to attacks. DNNShield takes a different route: it integrates untrainable protection layers into the model architecture without introducing any additional trainable parameters, as sketched below.
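A minimal sketch of how such an untrainable layer could look in PyTorch, assuming the protection layer applies a fixed, identifier-derived elementwise transform; the class name, the hash-based mask derivation, and the scaling scheme are illustrative assumptions, not the paper's actual construction:

```python
# Sketch of an untrainable protection layer (illustrative; not the paper's code).
import hashlib

import torch
import torch.nn as nn


class ProtectionLayer(nn.Module):
    """Applies a fixed, identifier-derived mask to its input.

    The mask is registered as a buffer, so it is saved with the model's
    state_dict but never updated by the optimizer: the layer contributes
    zero trainable parameters.
    """

    def __init__(self, num_features: int, owner_id: str):
        super().__init__()
        # Derive a deterministic seed from the owner identifier.
        seed = int.from_bytes(hashlib.sha256(owner_id.encode()).digest()[:8], "big")
        gen = torch.Generator().manual_seed(seed)
        # A fixed scaling mask close to 1.0, so accuracy is barely affected.
        mask = 1.0 + 0.01 * torch.randn(num_features, generator=gen)
        self.register_buffer("mask", mask)  # untrainable by construction

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.mask


# Inserting the layer into an otherwise ordinary classifier.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    ProtectionLayer(256, owner_id="alice@example.org"),
    nn.Linear(256, 10),
)

# The protected model has exactly as many trainable parameters as the
# unprotected baseline.
baseline = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
count = lambda m: sum(p.numel() for p in m.parameters() if p.requires_grad)
assert count(model) == count(baseline)
```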
The proposed method lets developers protect their DNNs and intellectual property rights efficiently. By incorporating unique identifiers into the model architecture, DNNShield supports ownership verification without relying on secrecy and without compromising performance.
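Continuing the sketch above, verification could then amount to recomputing the expected identifier-derived mask and comparing it against the suspect model: since the mask is a deterministic function of a public identifier, no secret key has to be produced or spent in the process. The matching rule and tolerance below are illustrative assumptions:

```python
# Ownership check: recompute the expected mask from the claimed identifier
# and compare against the buffer found in the suspect model. Reuses the
# ProtectionLayer and model defined in the previous sketch.
def verify_ownership(suspect: nn.Module, owner_id: str, atol: float = 1e-6) -> bool:
    """Return True if any ProtectionLayer in `suspect` matches `owner_id`."""
    for module in suspect.modules():
        if isinstance(module, ProtectionLayer):
            expected = ProtectionLayer(module.mask.numel(), owner_id).mask
            if torch.allclose(module.mask, expected, atol=atol):
                return True
    return False


print(verify_ownership(model, "alice@example.org"))    # True
print(verify_ownership(model, "mallory@example.org"))  # False
```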