The paper addresses the challenge of protecting model ownership in personalized federated learning (PFL) and introduces RobWE, a robust watermark embedding scheme. RobWE resolves conflicts among clients' private watermarks, defends against malicious watermark tampering, and provides a mechanism for detecting tampered watermarks. Experimental results show that RobWE surpasses existing schemes in fidelity, reliability, and robustness.
The paper also surveys related work on watermarking in both centralized and federated learning scenarios, underscoring the importance of ownership protection for AI models. It formalizes the problem of tampering attacks and defines the tasks required to achieve the PFL and watermark-embedding goals.
Furthermore, the proposed scheme is detailed in four steps: system setup, decoupled watermark embedding, representation training, and tampered-watermark detection. Experiments evaluate RobWE's fidelity, reliability, and robustness under various scenarios, including Non-IID data settings.
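The summary does not spell out RobWE's embedding algorithm, so the following is a purely hypothetical sketch of one common style of white-box watermark embedding: encoding a client's private watermark bits into the signs of selected parameters of its personalized model head, with a matching sign-based extraction step. The function names (`embed_watermark`, `extract_watermark`), the `strength` parameter, and the sign-encoding convention are all illustrative assumptions, not RobWE's actual method.

```python
import numpy as np

def embed_watermark(head_weights, watermark_bits, indices, strength=0.5):
    """Hypothetical sketch: force the sign of selected head parameters to
    encode watermark bits (bit 1 -> positive, bit 0 -> negative), while
    keeping each parameter's magnitude at least `strength`."""
    w = head_weights.copy()
    for bit, i in zip(watermark_bits, indices):
        sign = 1.0 if bit == 1 else -1.0
        w[i] = sign * max(abs(w[i]), strength)
    return w

def extract_watermark(head_weights, indices):
    """Read the watermark back from the signs of the selected parameters."""
    return [1 if head_weights[i] > 0 else 0 for i in indices]

# Toy demo: embed an 8-bit private watermark into a 64-parameter head.
rng = np.random.default_rng(0)
weights = rng.normal(size=64)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
idx = rng.choice(64, size=len(bits), replace=False)  # secret positions

marked = embed_watermark(weights, bits, idx)
recovered = extract_watermark(marked, idx)
```

Keeping the secret index set `idx` per client is what would let each participant verify its own watermark without interfering with others' embedding positions.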
The results demonstrate that RobWE outperforms FedIPR in preserving model accuracy while embedding watermarks, and achieves high reliability with improved detection rates for private watermarks. It also remains robust against pruning attacks, fine-tuning attacks, and adaptive tampering attacks through dedicated defense mechanisms.
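To make the pruning-robustness claim concrete, here is a hypothetical toy experiment (not the paper's actual evaluation) in the same sign-embedding style: a magnitude-based pruning attack zeroes out the smallest weights, and a detection rate measures how many watermark bits survive. The names `prune_smallest` and `watermark_detection_rate`, the 30% pruning ratio, and the embedding magnitude are all illustrative assumptions.

```python
import numpy as np

def prune_smallest(weights, ratio):
    """Simulate a magnitude-based pruning attack: zero out the
    smallest-magnitude fraction `ratio` of the parameters."""
    w = weights.copy()
    k = int(len(w) * ratio)
    if k > 0:
        w[np.argsort(np.abs(w))[:k]] = 0.0
    return w

def watermark_detection_rate(weights, bits, indices):
    """Fraction of watermark bits recovered from parameter signs;
    a zeroed parameter reads as bit 0."""
    extracted = [1 if weights[i] > 0 else 0 for i in indices]
    return sum(int(a == b) for a, b in zip(extracted, bits)) / len(bits)

# Toy setup: embed a 32-bit watermark with large magnitude, then prune 30%.
rng = np.random.default_rng(1)
weights = rng.normal(size=256)
bits = [1, 0] * 16
idx = rng.choice(256, size=len(bits), replace=False)
for bit, i in zip(bits, idx):
    weights[i] = (1.0 if bit else -1.0) * 1.5  # embed bits as strong signs

pruned = prune_smallest(weights, ratio=0.3)
rate = watermark_detection_rate(pruned, bits, idx)
```

Because the embedded parameters carry larger magnitudes than most untouched weights, magnitude pruning tends to remove other parameters first, which is one intuition for why sign-style watermarks can survive pruning attacks.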
Overall, the paper provides a comprehensive analysis of RobWE's effectiveness in protecting personalized model ownership in PFL through innovative watermark embedding techniques and robust detection mechanisms.