Privacy-preserving Fine-tuning of Large Language Models through Flatness: Balancing Privacy and Performance
This paper explores the trade-off between privacy and generalization in Large Language Models (LLMs) by enhancing weight flatness through a holistic framework. The proposed methods improve model performance while offering competitive privacy preservation.
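The abstract does not specify how flatness is enforced, but sharpness-aware minimization (SAM) is one standard way to bias fine-tuning toward flat regions of the loss landscape. The sketch below, on a toy quadratic loss, is purely illustrative and is not the paper's actual framework: each step first perturbs the weights toward the locally worst-case direction, then descends using the gradient at that perturbed point.

```python
import numpy as np

def loss(w):
    # Toy quadratic loss standing in for a fine-tuning objective.
    return 0.5 * np.sum((w - 1.0) ** 2)

def grad(w):
    # Analytic gradient of the toy loss.
    return w - 1.0

def sam_step(w, lr=0.1, rho=0.05):
    """One sharpness-aware minimization (SAM) step (illustrative):
    ascend to a nearby worst-case point within radius rho, then
    descend using the gradient computed there, which favors flat minima."""
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case perturbation
    g_flat = grad(w + eps)                        # gradient at the perturbed point
    return w - lr * g_flat

w = np.zeros(3)
for _ in range(200):
    w = sam_step(w)
# w settles close to the minimizer at 1.0, offset slightly by the
# perturbation radius rho.
```

In a differentially private setting, the descent gradient would additionally be clipped and noised (as in DP-SGD); the trade-off the paper studies is how such flatness-seeking updates interact with that privacy mechanism.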