Core Concepts
Generating accurate piece-wise polynomial approximations of complex activation functions for secure computation.
Abstract
The article introduces Compact, a method to approximate complex activation functions for secure multi-party computation. It addresses the inefficiency of current techniques in handling non-linear activation functions in deep neural networks. By using piece-wise polynomial approximations, Compact achieves near-identical model accuracy without imposing restrictions on model training. The approach incorporates input density awareness and optimization techniques to generate efficient approximations. Extensive evaluations show that Compact is faster than existing methods for DNN models with many hidden layers while maintaining negligible accuracy loss.
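The core idea — splitting an activation function's input range into pieces and fitting a low-degree polynomial to each piece until the fit error falls below a threshold — can be illustrated with a minimal greedy sketch. This is not the paper's density-aware algorithm; the function names, the bisection strategy, and the use of sigmoid as the target are illustrative assumptions.

```python
import numpy as np

def piecewise_poly_approx(f, lo, hi, degree=2, eps=1e-2, max_pieces=64):
    """Greedily split [lo, hi] into pieces, halving a piece until a
    degree-`degree` polynomial fit stays within max error `eps`.
    Illustrative sketch only, not Compact's actual algorithm."""
    pieces = []
    start = lo
    while start < hi and len(pieces) < max_pieces:
        end = hi
        while True:
            xs = np.linspace(start, end, 200)
            coeffs = np.polyfit(xs, f(xs), degree)
            err = np.max(np.abs(np.polyval(coeffs, xs) - f(xs)))
            if err <= eps or (end - start) < 1e-6:
                break
            end = start + (end - start) / 2  # halve the piece and retry
        pieces.append((start, end, coeffs))
        start = end
    return pieces

def eval_piecewise(pieces, x):
    """Evaluate the approximation at x by locating its piece."""
    for lo_, hi_, coeffs in pieces:
        if lo_ <= x <= hi_:
            return np.polyval(coeffs, x)
    raise ValueError("x outside approximated range")

# Example: approximate sigmoid on [-6, 6] to within eps = 10^-2
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
pieces = piecewise_poly_approx(sigmoid, -6.0, 6.0, degree=2, eps=1e-2)
```

The appeal for MPC is that each piece reduces evaluation to comparisons (piece selection) plus polynomial arithmetic, both of which existing secure-computation protocols handle efficiently.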
Stats
State-of-the-art approaches are 2×–5× slower than Compact for DNN models with many hidden layers.
Approximation error threshold is set at ε = 10^-2.
Quotes
"We present Compact, which produces piece-wise polynomial approximations of complex activation functions that can be used with state-of-the-art MPC techniques."
"Our work accelerates easy adoption of MPC techniques to provide user data privacy even when the queried DNN models consist of a number of hidden layers and complex activation functions."