Core Concepts
This work proposes a hybrid approach combining Split Learning (SL) and Function Secret Sharing (FSS) to ensure client data privacy and security against feature-space hijacking attacks in SL.
Abstract
The paper introduces a privacy-preserving machine learning protocol that combines SL and FSS to address the privacy leakage and security issues in SL. The key highlights are:
The authors identify two major attacks on SL: the Feature-Space Hijacking Attack (FSHA) and the Visual Invertibility Inference Attack (VIIA). FSHA allows a malicious server to hijack training and steer the client's model towards a target feature space of the server's choosing, while VIIA enables the server to infer sensitive information about the raw input directly from the activation maps the client uploads.
To mitigate these attacks, the authors propose a hybrid approach that applies FSS on the server side. The client adds a random mask to the activation map before sending it to the servers, so no server ever sees the raw activation. The servers then operate on function shares generated via FSS rather than on the original function itself, which prevents them from reconstructing the client's raw data from the activation map.
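The masking step can be illustrated with a small sketch. Note that this is a simplification, not the authors' protocol: real FSS secret-shares a *function* between the two servers, whereas here plain additive secret sharing of the masked value stands in for FSS key generation, and the function names (`client_mask`, `share`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def client_mask(activation):
    """Client-side: hide the activation map behind a one-time random mask.
    The servers only ever see activation + r, never the raw activation."""
    r = rng.normal(size=activation.shape)  # one-time random mask, kept by the client
    return activation + r, r

def share(masked):
    """Split the masked activation into two additive shares, one per server.
    (Stand-in for FSS: real FSS shares a function, not a value.)"""
    s0 = rng.normal(size=masked.shape)
    s1 = masked - s0
    return s0, s1

# Client computes the activation map at the split layer and protects it.
activation = rng.normal(size=(4, 4))
masked, r = client_mask(activation)
s0, s1 = share(masked)

# Neither server's share alone reveals anything; even both shares combined
# only yield the masked value, and only the client knows the mask r.
reconstructed = (s0 + s1) - r
assert np.allclose(reconstructed, activation)
```

The key property sketched here is that the servers' view (the shares, and even their sum) is statistically independent of the raw activation without the client-held mask.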
The authors demonstrate that their approach is more efficient than prior work based on Homomorphic Encryption, in terms of both communication overhead and training time. Compared to running the same model with FSS alone (without SL), their protocol reduces communication overhead by over 2x and training time by over 7x.
The authors show that their approach achieves over 96% accuracy on the MNIST dataset, matching the corresponding plaintext models, while providing strong privacy guarantees.