Core Concepts
Proposing Batch-oriented Element-wise Approximate Activation (BEAA) to enhance privacy and utility in privacy-preserving neural networks (PPNN).
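The exact packing layout is detailed in the paper; as a rough sketch of the general idea, element-wise (as opposed to channel-wise) packing places the same element of every image in the batch into the slots of one ciphertext, so a single homomorphic operation acts on the whole batch at once. The function and array names below are illustrative, not from the paper:

```python
import numpy as np

def pack_element_wise(images):
    """Group a batch so each slot vector holds one element position.

    images: array of shape (batch, height, width). The result has one
    slot vector per pixel position; encrypting each vector would yield
    one ciphertext whose slots span the batch, so every homomorphic
    operation is amortized over all images simultaneously.
    """
    batch, h, w = images.shape
    # Row p of the result = pixel position p across the whole batch.
    return images.reshape(batch, h * w).T  # shape: (h*w, batch)

# Example: 4096 images of 28x28 -> 784 slot vectors of length 4096.
batch = np.random.rand(4096, 28, 28)
slots = pack_element_wise(batch)
assert slots.shape == (28 * 28, 4096)
```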
Abstract
The study introduces Batch-oriented Element-wise Approximate Activation (BEAA), a novel approach for privacy-preserving neural networks. It combines element-wise data packing with a trainable approximate activation function to reduce the accuracy loss caused by approximation errors. The packing scheme enables concurrent inference on large batches of images, improving the utilization ratio of ciphertext slots, and knowledge distillation is incorporated to further enhance inference accuracy. Experimental results show improved accuracy and reduced amortized inference time compared with existing methods.
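The paper's trainable approximate activation is not spelled out in this summary. As a hedged illustration, HE-friendly networks commonly replace a non-polynomial activation such as ReLU with a low-degree polynomial whose coefficients are learned during training, so the network can compensate for the approximation error. A minimal PyTorch sketch of that idea (class and parameter names are assumptions, not the paper's API):

```python
import torch
import torch.nn as nn

class TrainableApproxActivation(nn.Module):
    """Low-degree polynomial activation with learnable coefficients.

    Replaces ReLU with a*x^2 + b*x + c; the coefficients are trained
    jointly with the rest of the network (optionally under knowledge
    distillation from a teacher that uses the exact activation).
    """
    def __init__(self, a=0.0, b=1.0, c=0.0):
        super().__init__()
        # Initialized near the identity so early training is stable.
        self.a = nn.Parameter(torch.tensor(float(a)))
        self.b = nn.Parameter(torch.tensor(float(b)))
        self.c = nn.Parameter(torch.tensor(float(c)))

    def forward(self, x):
        # Only additions and multiplications: evaluable under HE.
        return self.a * x * x + self.b * x + self.c
```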
Stats
BEAA improves inference accuracy by 1.65% compared with the most efficient channel-wise method.
The total inference time of BEAA exceeds 3130 seconds, significantly longer than that of other methods.
Because a large batch of images is inferred concurrently, however, the amortized time per image is approximately 0.764 seconds.
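The batch size is not stated in this summary; dividing the two reported figures suggests roughly 3130 / 0.764 ≈ 4096 images per run, which matches a common ciphertext slot count. A back-of-envelope check (the inferred batch size is an assumption, not a reported number):

```python
total_time_s = 3130    # reported lower bound on total inference time
amortized_s = 0.764    # reported amortized time per image
print(total_time_s / amortized_s)  # ~4096 images per batch (inferred)
```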