Privacy-Preserving Techniques for Prompt Engineering in Large Language Models
Prompting large language models with sensitive data poses significant privacy risks. This survey systematically reviews techniques for mitigating these risks during prompting, including sanitization, obfuscation, encryption, and differential privacy.