Enabling Large Language Models to Collaborate and Learn from Each Other While Preserving Privacy
Local language models can improve their performance by querying more capable remote models, but when the local model handles sensitive data, those queries pose a significant privacy risk: private information may be exposed to the remote model. This work introduces privacy-preserving techniques that allow local models to leverage remote models without revealing private information.