Aharoni, E., Baruch, M., Bose, P., Buyuktosunoglu, A., Drucker, N., Pal, S., Pelleg, T., Sarpatwar, K., Shaul, H., Soceanu, O., & Vaculin, R. (2024). Efficient Pruning for Machine Learning under Homomorphic Encryption. In ESORICS 2023: European Symposium on Research in Computer Security. Springer. https://doi.org/10.1007/978-3-031-51482-1_11
This paper addresses the high latency and memory requirements that homomorphic encryption (HE) imposes on privacy-preserving machine learning (PPML) by introducing HE-PEx, a novel pruning framework.
The researchers developed HE-PEx, a framework that combines four main primitives: prune, permute, pack, and expand. They implemented various pruning schemes based on these primitives, including a novel co-permutation algorithm to enhance tile sparsity without compromising accuracy. The team evaluated HE-PEx on four PPML networks (MLPs, CNNs, and autoencoders) trained on MNIST, CIFAR-10, SVHN, and COVIDx CT-2A datasets. They compared their methods with existing techniques, including an adaptation of the Hunter scheme, using metrics like tile sparsity, inference accuracy/loss, latency, and memory requirements.
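The prune-permute-pack pipeline can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the 2x2 tile shape, the 80% pruning ratio, and the greedy sort-rows-by-nonzero-count permutation are illustrative assumptions standing in for HE-PEx's actual packing and co-permutation algorithms. The key idea it demonstrates is that under tile-based HE packing, only tiles that are entirely zero can be skipped, so a permutation that clusters zeros into whole tiles raises the exploitable "tile sparsity" without changing the network's weights.

```python
import numpy as np

def prune_by_magnitude(w, sparsity=0.8):
    """Zero out the smallest-magnitude weights (unstructured pruning)."""
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    thresh = np.sort(np.abs(w), axis=None)[k - 1]
    return np.where(np.abs(w) <= thresh, 0.0, w)

def tile_sparsity(w, tile=(2, 2)):
    """Fraction of tiles that are entirely zero.

    Under tile-based HE packing, only all-zero tiles can be dropped,
    so this (not element-wise sparsity) is what reduces ciphertext count.
    """
    th, tw = tile
    rows, cols = w.shape
    tiles = w.reshape(rows // th, th, cols // tw, tw)
    zero_tiles = np.all(tiles == 0, axis=(1, 3))
    return zero_tiles.mean()

def permute_rows(w):
    """Toy 'permute' step: sort rows by nonzero count so sparse rows
    land in the same tiles. (HE-PEx uses a more sophisticated
    co-permutation of rows and columns across adjacent layers.)"""
    order = np.argsort(np.count_nonzero(w, axis=1))
    return w[order], order

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 8))
pruned = prune_by_magnitude(w, sparsity=0.8)
permuted, order = permute_rows(pruned)
print("tile sparsity before permute:", tile_sparsity(pruned))
print("tile sparsity after permute: ", tile_sparsity(permuted))
```

Because the permutation is applied consistently to the layer's inputs and outputs (and recorded, so it can be undone by the "expand" step), the network computes the same function; only the zero layout changes.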
HE-PEx significantly reduces the computational overhead of PPML inference under HE. The proposed techniques achieved tile sparsities of up to 95% (average 61%) across different datasets and network architectures, with a minimal degradation in inference accuracy/loss (within 2.5%). Compared to the state-of-the-art pruning technique, HE-PEx generated networks with 70% fewer ciphertexts on average for the same degradation limit. This sparsity translated to a 10–35% improvement in inference speed and a 17–35% reduction in memory requirements compared to unpruned models in a privacy-preserving image denoising application.
HE-PEx offers a practical solution for deploying efficient and privacy-preserving machine learning models using HE. The framework's ability to significantly reduce computational overhead without compromising accuracy makes it a valuable tool for real-world PPML applications.
This research significantly contributes to the field of PPML by addressing a major obstacle to the wider adoption of HE: its computational inefficiency. HE-PEx opens up new possibilities for deploying complex machine learning models in privacy-sensitive domains like healthcare and finance.
While HE-PEx demonstrates significant improvements, the authors acknowledge that exploring different pruning thresholds for specific layers could further enhance performance. Future research could investigate the application of hyperparameter search techniques to optimize these thresholds. Additionally, further investigation into the privacy implications of pruned models in HE would be beneficial.
Source: Ehud Aharoni et al., https://arxiv.org/pdf/2207.03384.pdf (arXiv, accessed 11-05-2024)