Core Concepts
The author explores the gap between theoretical advancements and real-world applications in Privacy-Preserving Machine Learning, focusing on Homomorphic Encryption and Secure Multi-party Computation.
Summary
The article examines the challenges of implementing Privacy-Preserving Techniques in Machine Learning, specifically Homomorphic Encryption and Secure Multi-party Computation. It discusses the importance of maintaining data privacy while training ML models and highlights the difficulty researchers face in reproducing results due to the lack of open-source implementations.
Key Points
Recent advances in Privacy-Preserving Techniques (PPTs) have enabled ML training over protected data through Privacy-Preserving Machine Learning (PPML).
The main techniques used for PPML are Homomorphic Encryption (HE), Secure Multi-party Computation (SMPC), Federated Learning (FL), Differential Privacy (DP), and Functional Encryption (FE).
HE schemes allow computations on encrypted data without decrypting it, enabling secure cloud services for sensitive information like medical records.
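The defining property above can be illustrated with a minimal sketch. Textbook RSA (not a scheme discussed in the article, and emphatically not secure as written) happens to be multiplicatively homomorphic, which makes it a convenient toy for showing what "computing on encrypted data" means:

```python
# Toy illustration of a homomorphic property (NOT secure, for intuition only):
# textbook RSA satisfies E(a) * E(b) mod n == E(a * b mod n), so a server
# can multiply ciphertexts without ever decrypting them.
p, q = 61, 53            # tiny primes, chosen only for readability
n = p * q                # RSA modulus
phi = (p - 1) * (q - 1)
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent (modular inverse, Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 3
# The "server" multiplies ciphertexts; it never sees a or b in the clear.
product_ciphertext = (encrypt(a) * encrypt(b)) % n
assert decrypt(product_ciphertext) == (a * b) % n  # → 21
```

Real HE schemes used in PPML (e.g. CKKS or BFV) support richer operations, but the principle is the same: the party holding the data encrypts it, the untrusted party computes on ciphertexts, and only the key holder can read the result.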
SMPC enables multiple parties to compute functions on private data while preserving privacy, with applications ranging from private DNA comparison to secure wage analysis.
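The wage-analysis use case maps naturally onto additive secret sharing, one of the standard SMPC building blocks. The sketch below (an assumption for illustration; the article does not prescribe a specific protocol) shows three parties learning the sum of their wages without any party revealing its own:

```python
import random

# Field modulus for the shares; any prime comfortably larger than the
# possible sums works (2**61 - 1 is a convenient Mersenne prime).
PRIME = 2**61 - 1

def share(secret: int, n_parties: int) -> list[int]:
    """Split `secret` into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

wages = [52000, 61000, 48000]              # each party's private input
all_shares = [share(w, 3) for w in wages]  # row i = shares of party i's wage

# Party j receives only column j, which is uniformly random on its own
# and reveals nothing about any individual wage.
partial_sums = [sum(col) % PRIME for col in zip(*all_shares)]

# Combining the three partial sums reveals only the aggregate.
total = sum(partial_sums) % PRIME
assert total == sum(wages)  # → 161000
```

Production SMPC protocols add share multiplication, malicious-security checks, and communication layers on top of this idea, which is a large part of the implementation gap the article describes.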
PPML frameworks aim to bridge the gap between theoretical advancements and real-world applications by prioritizing open-source availability for reproducibility.
Citations
"We provide a solid theoretical background that eases the understanding of current approaches and their limitations."
"Our work serves as a valuable contribution by raising awareness about the current gap between theoretical advancements and real-world applications in PPML."
"Recent advances in Privacy-Preserving Techniques have enabled ML training over protected data through Privacy-Preserving Machine Learning."