The CypherTalk framework addresses the need for privacy protection in LLMs while maintaining high model performance. It introduces shaking operators for privacy-preserving fine-tuning and inference, and demonstrates their effectiveness against existing methods.
Large Language Models (LLMs) are gaining popularity, but privacy and security concerns remain significant obstacles to their use. CypherTalk addresses this with a cost-effective framework that balances privacy preservation and model utility: by employing shaking operators, users can achieve reliable accuracy while keeping sensitive data protected when fine-tuning and serving models on cloud platforms.
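As a rough illustration of the idea, the sketch below models a shaking operator as a key-derived, reversible perturbation applied to embeddings before they leave the user's machine. This is an assumption for illustration only; the function names, the additive form of the perturbation, and the parameters are hypothetical and are not the paper's actual operators.

```python
import numpy as np

def generate_shake_key(seed: int, dim: int) -> np.ndarray:
    """Derive a deterministic perturbation vector from a user-held secret seed."""
    rng = np.random.default_rng(seed)
    return rng.normal(loc=0.0, scale=0.1, size=dim)

def shake(embeddings: np.ndarray, key: np.ndarray) -> np.ndarray:
    """Apply the key-derived perturbation before sending embeddings to the cloud."""
    return embeddings + key

def unshake(embeddings: np.ndarray, key: np.ndarray) -> np.ndarray:
    """Remove the perturbation on the user side (inverse of shake)."""
    return embeddings - key

# Usage: only the shaken embeddings would ever leave the user's machine.
x = np.random.default_rng(0).normal(size=(4, 768))  # toy batch of embeddings
k = generate_shake_key(seed=42, dim=768)
assert np.allclose(unshake(shake(x, k), k), x)
```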
Recent research on privacy-protected fine-tuning for LLMs falls into two main categories: crypto-based methods such as Homomorphic Encryption (HE) and Secure Multi-Party Computation (MPC), and Differential Privacy (DP) based methods. However, these approaches typically trade privacy preservation against model accuracy.
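To make the privacy-utility trade-off mentioned above concrete, here is a minimal sketch of the standard Laplace mechanism, one of the classic DP building blocks: a smaller privacy budget epsilon gives stronger privacy but noisier, less accurate outputs. The values and function name are illustrative, not taken from the paper.

```python
import numpy as np

def laplace_mechanism(value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private estimate of `value` (Laplace mechanism)."""
    scale = sensitivity / epsilon
    return value + np.random.default_rng().laplace(loc=0.0, scale=scale)

true_count = 1000.0
for eps in (0.1, 1.0, 10.0):
    noisy = laplace_mechanism(true_count, sensitivity=1.0, epsilon=eps)
    print(f"epsilon={eps:>4}: noisy count = {noisy:.1f}")
```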
CypherTalk's approach proceeds in four stages, key generation, key implantation, private tuning, and private inference, to preserve data privacy while maintaining high model performance. In the reported experiments, the framework outperforms state-of-the-art baselines in both accuracy and cost-effectiveness.
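The toy sketch below lays out those four stages end to end, under the assumption that "key implantation" means adapting the model to the key-shaken input space and that shaking is an additive key-derived perturbation. Every function here is a hypothetical placeholder standing in for the paper's real training and inference procedures, not its API.

```python
import numpy as np

DIM = 8

def key_generation(seed: int) -> np.ndarray:
    """Stage 1: the user derives a secret perturbation vector from a private seed."""
    return np.random.default_rng(seed).normal(scale=0.1, size=DIM)

def shake(x: np.ndarray, key: np.ndarray) -> np.ndarray:
    """Perturb data with the user's key before it leaves the user's machine."""
    return x + key

def key_implantation(weights: np.ndarray, key: np.ndarray) -> np.ndarray:
    """Stage 2 (toy stand-in): adjust the model so it expects shaken inputs."""
    return weights - key  # placeholder for fine-tuning on key-shaken data

def private_tuning(weights: np.ndarray, shaken_batch: np.ndarray) -> np.ndarray:
    """Stage 3 (toy stand-in): tune in the cloud on shaken private data only."""
    return weights + 0.01 * shaken_batch.mean(axis=0)  # placeholder update rule

def private_inference(weights: np.ndarray, shaken_query: np.ndarray) -> float:
    """Stage 4 (toy stand-in): the cloud model scores the shaken query."""
    return float(weights @ shaken_query)

key = key_generation(seed=42)
w = key_implantation(np.zeros(DIM), key)
w = private_tuning(w, shake(np.ones((4, DIM)), key))
score = private_inference(w, shake(np.ones(DIM), key))
print(f"inference score on shaken query: {score:.3f}")
```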
Source: Zhiyu Chen et al., arxiv.org, 03-13-2024, https://arxiv.org/pdf/2403.07283.pdf