Protecting Training Data Privacy in Deep Learning Models through Differentially Private Regularization
Differential privacy in deep learning models can be achieved through a novel regularization strategy that is both more efficient and more effective than the standard differentially private stochastic gradient descent (DP-SGD) algorithm.
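Since the abstract does not spell out the regularizer itself, the sketch below shows only the DP-SGD baseline that the claim is compared against: per-example gradient clipping followed by calibrated Gaussian noise. It is a minimal illustration, not the proposed method; the names `dp_sgd_step`, `clip_norm`, and `noise_mult` are assumptions introduced here. The per-example loop is the extra processing that a regularization-based alternative would aim to avoid.

```python
import torch
import torch.nn.functional as F

def dp_sgd_step(model, optimizer, batch, targets, clip_norm=1.0, noise_mult=1.1):
    """One DP-SGD step (baseline sketch): clip each example's gradient,
    add Gaussian noise scaled by noise_mult * clip_norm, then average."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    # Per-example gradients: the main source of DP-SGD's overhead.
    for x, y in zip(batch, targets):
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (total_norm + 1e-6)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)

    # Noisy average gradient, then a standard optimizer update.
    optimizer.zero_grad()
    for p, s in zip(params, summed):
        noise = torch.randn_like(s) * noise_mult * clip_norm
        p.grad = (s + noise) / len(batch)
    optimizer.step()
```

In contrast, a regularization-based approach would keep the ordinary batched training loop and add a penalty term to the loss, avoiding per-example gradient computation and per-step noise injection.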