
Securing Split Learning Against Feature-Space Hijacking and Visual Invertibility Attacks


Core Concepts
This work proposes a hybrid approach combining Split Learning (SL) and Function Secret Sharing (FSS) to ensure client data privacy and security against feature-space hijacking attacks in SL.
Abstract
The paper introduces a privacy-preserving machine learning protocol that combines SL and FSS to address the privacy leakage and security issues in SL. The key highlights are:

- The authors identify two major attacks on SL: the Feature-Space Hijacking Attack (FSHA), in which a malicious server hijacks the model and steers it toward a specific target feature space, and the Visual Invertibility Inference Attack (VIIA), in which the server infers sensitive information from the activation maps.
- To mitigate these attacks, the authors propose a hybrid approach that applies FSS on the server side. The client adds a random mask to the activation map before sending it to the servers; the servers never access the original function and instead work with shares generated using FSS. This prevents the servers from reconstructing the client's raw data from the activation map.
- The approach is more efficient than prior work based on Homomorphic Encryption, in terms of both communication overhead and training time. The protocol reduces communication overhead by over 2× and training time by over 7× compared to the same model trained with FSS but without SL.
- The approach achieves >96% accuracy on the MNIST dataset, equivalent to the plaintext models, while providing strong privacy guarantees.
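To make the client-side masking step concrete, the following is a minimal sketch of additively masking an activation map before it is sent to the servers. The ring size `MOD`, fixed-point scale `SCALE`, and the helper `mask_activation` are illustrative assumptions, not the paper's exact construction; the actual FSS backend fixes its own encoding and modulus.

```python
import numpy as np

MOD = 2**32    # illustrative ring size (assumption; the FSS backend fixes its own modulus)
SCALE = 2**16  # illustrative fixed-point scale for encoding real-valued activations

def mask_activation(activation, rng):
    """Client side: additively mask the activation map before it leaves the device.

    The servers only ever see `masked`; recovering `activation` would require
    the random mask, which never leaves the client.
    """
    fixed_point = np.round(activation * SCALE).astype(np.int64) % MOD
    mask = rng.integers(0, MOD, size=activation.shape, dtype=np.int64)
    masked = (fixed_point + mask) % MOD
    return masked, mask

rng = np.random.default_rng(0)
activation_map = np.random.rand(1, 8, 14, 14)   # toy activation from the client-side layers
masked, mask = mask_activation(activation_map, rng)
```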

Deeper Inquiries

How can the proposed approach be extended to handle more complex neural network architectures beyond the MNIST dataset?

To extend the proposed approach beyond the MNIST dataset, several considerations come into play. First, the FSS-based Split Learning (SL) protocol can be adapted to deeper architectures by choosing a suitable split layer, with the client handling the initial layers and the server managing the remaining ones; the FSS protocol itself can then be optimized for the increased computational load of larger models.

Second, for more complex datasets with higher-dimensional inputs, such as higher-resolution or multi-channel images, the input-processing steps (resizing, normalization) may need to be adjusted, and the training procedure tuned to cope with larger dataset sizes and longer training times.

With these adaptations, the FSS-based SL protocol can be extended to a much wider range of machine learning tasks than the MNIST benchmark.
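The split-layer choice described above can be sketched in a few lines of PyTorch. The CNN below and the split index are hypothetical assumptions for illustration; the paper does not prescribe a specific architecture or cut point.

```python
import torch
import torch.nn as nn

# A hypothetical CNN; the architecture and split point are design choices for illustration.
full_model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # client-side layers
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # server-side layers ...
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

split_at = 3                          # client keeps the first `split_at` layers
client_model = full_model[:split_at]  # nn.Sequential supports slicing
server_model = full_model[split_at:]

x = torch.randn(2, 1, 28, 28)         # toy MNIST-shaped batch
activation = client_model(x)          # this is the tensor that would be masked and FSS-shared
logits = server_model(activation)     # shape: (2, 10)
```

Choosing a deeper split point keeps more computation (and more of the model) on the client, at the cost of client-side resources; a shallower split shifts work to the servers but exposes earlier, more invertible activations, which is exactly why the masking step matters.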

What are the potential limitations or trade-offs of using FSS compared to other privacy-preserving techniques like Differential Privacy or Homomorphic Encryption?

While Function Secret Sharing (FSS) offers strong privacy guarantees for machine learning, it comes with limitations and trade-offs relative to techniques like Differential Privacy (DP) or Homomorphic Encryption (HE).

First, FSS introduces computational overhead: the function is split into shares and every evaluation is carried out on those shares, which is more expensive than the simple noise addition used by DP and can lengthen training time and reduce overall efficiency. Second, there is residual risk of information leakage if the protocol is implemented incorrectly or if an adversary (for example, colluding servers) obtains all of the function shares. Third, FSS protocols can be harder to scale and to adapt to complex neural network architectures or large-scale workloads, which may limit their applicability in some scenarios.

These trade-offs should be weighed against FSS's strong privacy and security properties when selecting a privacy-preserving technique for a given machine learning task.
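To illustrate the "computing on shares" overhead mentioned above, here is a minimal sketch of plain additive secret sharing between two servers. This is an assumption-laden simplification: FSS proper shares a *function* via compact keys rather than additively sharing data, but the sketch shows why every linear operation is duplicated across servers and why shares must be communicated.

```python
import numpy as np

MOD = 2**32
rng = np.random.default_rng(1)

def share(x):
    """Split a secret integer array into two additive shares modulo MOD."""
    s0 = rng.integers(0, MOD, size=x.shape, dtype=np.int64)
    s1 = (x - s0) % MOD
    return s0, s1

secret = np.array([5, 17, 42], dtype=np.int64)
a0, a1 = share(secret)

# Each server computes on its own share; for linear operations the results
# recombine to the correct value, but every operation is done twice and the
# shares must be exchanged -- the overhead discussed above.
b0, b1 = (3 * a0) % MOD, (3 * a1) % MOD
assert np.array_equal((b0 + b1) % MOD, (3 * secret) % MOD)
```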

Can the idea of combining SL with FSS be applied to other collaborative machine learning techniques beyond just SL?

The idea of combining Split Learning (SL) with Function Secret Sharing (FSS) can indeed be carried over to other collaborative machine learning techniques. Integrating FSS into frameworks such as Federated Learning (FL) or Secure Multi-Party Computation (MPC) can strengthen privacy and security while preserving the collaborative nature of training.

In FL, FSS can protect client data during the model aggregation phase, so that clients contribute their model updates without revealing raw data or individual contributions to the central server, strengthening the framework's privacy guarantees. In MPC-based collaborative learning, FSS can be used to securely compute functions on data shared among multiple parties, allowing them to jointly train a model without compromising the privacy of their individual inputs.

In short, the SL-plus-FSS combination generalizes to a range of collaborative learning techniques, providing a robust, privacy-preserving framework for training models in distributed environments.
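As a toy illustration of the FL aggregation idea mentioned above, the sketch below masks each client's update so that the server only learns the sum. This is a simplified stand-in for secure aggregation, not the paper's protocol: the function `masked_updates` and the shared RNG are assumptions, and a real deployment would derive pairwise masks from key agreement (or use FSS/MPC machinery) rather than a single RNG.

```python
import numpy as np

rng = np.random.default_rng(2)

def masked_updates(updates):
    """Each client adds a random mask; the masks are constructed to cancel in the sum."""
    n = len(updates)
    masks = [rng.normal(size=updates[0].shape) for _ in range(n - 1)]
    masks.append(-np.sum(masks, axis=0))            # masks sum to zero across clients
    return [u + m for u, m in zip(updates, masks)]

client_updates = [np.random.rand(4) for _ in range(3)]
sent = masked_updates(client_updates)

# The server sees only masked updates, yet their sum equals the true aggregate.
assert np.allclose(np.sum(sent, axis=0), np.sum(client_updates, axis=0))
```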