Theoretical Foundations and Empirical Validation of Sparse Neural Network Optimization using Iterative Hard Thresholding
Sparse neural networks can be learned effectively via iterative hard thresholding (IHT), an optimization method with established theoretical foundations that identifies the locations of a network's nonzero parameters while simultaneously learning their values.
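A minimal sketch of the IHT update may help make the abstract concrete: each iteration takes a gradient step and then projects onto the set of k-sparse vectors by keeping only the k largest-magnitude entries. For simplicity the sketch below applies IHT to a least-squares objective rather than a full neural network; the function names and hyperparameters are illustrative, not taken from the paper.

```python
import numpy as np

def hard_threshold(w, k):
    """Keep the k largest-magnitude entries of w; zero out the rest."""
    out = np.zeros_like(w)
    if k > 0:
        idx = np.argsort(np.abs(w))[-k:]
        out[idx] = w[idx]
    return out

def iht(X, y, k, lr=0.05, steps=1000):
    """Iterative hard thresholding on 0.5 * mean squared residual:
    gradient descent step followed by projection onto k-sparse vectors."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n  # gradient of the least-squares loss
        w = hard_threshold(w - lr * grad, k)
    return w
```

On a noiseless problem with a k-sparse ground truth, this recovers both the support (the locations of the nonzeros) and the parameter values, mirroring the abstract's claim that IHT learns where the nonzero parameters are as well as what they are.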