The author argues that well-trained models need protection from unauthorized access, leading to the development of the MAP framework based on the Inverse Transfer Parameter Hypothesis.
Deep learning models can be protected from unauthorized usage through the innovative MAsk Pruning (MAP) framework, which maximizes performance degradation on unauthorized data while maintaining source-domain performance.
Deep learning models can be safeguarded through the innovative MAsk Pruning (MAP) framework, ensuring protection of intellectual property in source-free scenarios.
The MAP method offers an effective solution for protecting the intellectual property of models across a variety of scenarios.
This paper proposes Non-Transferable Pruning (NTP), a novel method that protects the intellectual property (IP) of pre-trained deep neural networks (DNNs) by restricting the model's transferability to unauthorized data domains.
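The mask-pruning idea these summaries describe can be sketched as follows: keep the parameters the source domain relies on, and prune those that mainly enable the unauthorized target domain. The synthetic data, the greedy coordinate search, and the trade-off weight `lam` below are all illustrative assumptions, not the papers' actual algorithms (which learn masks by optimization over real networks).

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 200

# Hypothetical "pre-trained" weights and two synthetic domains.
# Source labels depend only on the first half of the features;
# unauthorized target labels depend on all of them.
w = rng.normal(size=d)
src_support = np.r_[np.ones(4), np.zeros(4)]
Xs, Xt = rng.normal(size=(n, d)), rng.normal(size=(n, d))
y_src = Xs @ (w * src_support)
y_tgt = Xt @ w

def mse(X, y, mask):
    """Squared error of the masked (pruned) model on a domain."""
    return float(np.mean((X @ (w * mask) - y) ** 2))

mask = np.ones(d)
lam = 2.0  # how strongly pruning must favor target degradation over source harm
for j in range(d):
    trial = mask.copy()
    trial[j] = 0.0
    d_tgt = mse(Xt, y_tgt, trial) - mse(Xt, y_tgt, mask)  # target-loss increase
    d_src = mse(Xs, y_src, trial) - mse(Xs, y_src, mask)  # source-loss increase
    # Prune parameter j only if it degrades the unauthorized domain
    # clearly more than it harms the source domain.
    if d_tgt > 0 and d_tgt > lam * d_src:
        mask = trial

print("mask:", mask)
print("source loss:", mse(Xs, y_src, mask))
print("target loss:", mse(Xt, y_tgt, mask))
```

After the search, the masked model's target-domain loss can only have grown, while the acceptance rule keeps pruning away from parameters the source domain needs; this is the "restrict transferability, preserve source performance" trade-off in miniature.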