Adding noise to the activations (features) in the final layers of a deep neural network can limit how accurately the input data can be reconstructed from those activations. The Hammersley-Chapman-Robbins (HCR) bounds provide a principled way to quantify the confidentiality afforded by such added noise.
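As a rough illustration of the idea (not the paper's actual derivation), the sketch below adds Gaussian noise to a feature vector and evaluates an HCR-style lower bound on the variance of any unbiased estimator of an input parameter observed only through that noisy feature. For additive N(0, sigma^2) noise, the chi-squared divergence between the two observation distributions has a closed form, exp(delta_f^2 / sigma^2) - 1; the function name and the toy numbers are hypothetical choices for this example.

```python
import numpy as np

def hcr_lower_bound(delta_theta: float, delta_f: float, sigma: float) -> float:
    """HCR lower bound on the variance of any unbiased estimator of an
    input parameter theta, observed only through a final-layer feature
    f(theta) corrupted by additive N(0, sigma^2) noise.

    A perturbation delta_theta in the input shifts the feature by
    delta_f.  For Gaussian noise the chi-squared divergence between the
    two observation distributions is exp(delta_f^2 / sigma^2) - 1, so

        Var(theta_hat) >= delta_theta^2 / (exp(delta_f^2 / sigma^2) - 1).
    """
    chi2 = np.expm1((delta_f / sigma) ** 2)  # chi-squared divergence
    return delta_theta ** 2 / chi2

# Toy numbers (illustrative only): an input perturbation of 0.1 moves the
# feature by 0.05, and the defender adds noise with std 0.5.  Larger sigma
# inflates the bound, i.e. reconstruction necessarily gets less accurate.
bound = hcr_lower_bound(delta_theta=0.1, delta_f=0.05, sigma=0.5)

# Adding the noise itself is a one-liner on the activations:
rng = np.random.default_rng(0)
features = np.array([0.3, -1.2, 0.7])
noisy_features = features + rng.normal(scale=0.5, size=features.shape)
```

Note how the bound grows as sigma increases: the larger the noise, the larger the guaranteed floor on any reconstructor's error, which is exactly the confidentiality guarantee the HCR machinery quantifies.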
A local language model can improve its performance by querying a more capable remote model, but this poses a significant privacy risk when the local model has access to sensitive data. This work introduces privacy-preserving techniques that allow local models to leverage remote models without revealing private information.