The paper centers on the ridgelet transform, a tool for analyzing neural network parameters. Rather than studying the parameters of each neuron directly, the ridgelet transform maps a given function f to a parameter distribution γ such that the corresponding network reproduces f, so the network's parameters can be analyzed indirectly through their distribution.
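For reference, a minimal sketch of the classical Euclidean case (depth-2, fully-connected networks), written in the notation commonly used in this line of work; the paper's own normalizations may differ. With activation σ and an auxiliary ridgelet function ρ, the integral representation S and the ridgelet transform R are

\[ S[\gamma](x) = \int_{\mathbb{R}^m \times \mathbb{R}} \gamma(a,b)\, \sigma(a \cdot x - b)\, \mathrm{d}a\, \mathrm{d}b, \qquad R[f](a,b) = \int_{\mathbb{R}^m} f(x)\, \overline{\rho(a \cdot x - b)}\, \mathrm{d}x, \]

so S[γ] is a "continuous" network whose hidden parameters (a, b) are weighted by the distribution γ, and R[f] is one explicit choice of γ that reproduces f.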
The key contributions of the paper are:
Explaining a systematic Fourier slice method to derive ridgelet transforms for a wide range of neural network architectures beyond the classical fully-connected layer (a sketch of the Euclidean version of this argument follows the list).
Showcasing the derivation of ridgelet transforms for four specific cases: networks on finite fields F_p, group convolutional networks on abstract Hilbert spaces, fully-connected networks on noncompact symmetric spaces, and pooling layers (the d-plane ridgelet transform).
Demonstrating that the reconstruction formula S[R[f]] = f holds for the derived ridgelet transforms, which provides a constructive proof of the universal approximation theorem for the corresponding neural network architectures.
Highlighting the advantages of the integral-representation and ridgelet-transform approach: because the map from the parameter distribution γ to the network S[γ] is linear, learning problems become linear and convex in γ, and the framework accommodates a wide range of activation functions.
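The following, referenced above, is a sketch of the Fourier slice argument in the classical Euclidean case, continuing the notation introduced earlier; the signs, conjugations, and constants are the usual ones from the ridgelet literature and may differ from the paper's exact conventions. Taking the Fourier transform of R[f](a, ·) in the bias variable b produces a slice of the Fourier transform of f,

\[ \int_{\mathbb{R}} R[f](a,b)\, e^{-i\omega b}\, \mathrm{d}b = \hat{f}(\omega a)\, \overline{\hat{\rho}(\omega)}, \]

and substituting this into S, applying Parseval's identity in b, and changing variables to ξ = ωa recovers f up to an admissibility constant (for real-valued σ):

\[ S[R[f]] = \left( (2\pi)^{m-1} \int_{\mathbb{R}} \frac{\hat{\sigma}(\omega)\, \overline{\hat{\rho}(\omega)}}{|\omega|^{m}}\, \mathrm{d}\omega \right) f . \]

Choosing ρ so that the bracketed constant equals 1 gives the reconstruction S[R[f]] = f. Roughly, the systematic method described in the paper replaces the Euclidean Fourier transform in this argument with the harmonic analysis appropriate to each architecture's input domain.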
The paper aims to unify and extend the existing results on the ridgelet transform for neural networks, providing a systematic framework for analyzing the parameters of modern neural network architectures.
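As a quick numerical plausibility check of the Euclidean Fourier slice identity sketched above (an illustrative sketch only, not code from the paper; the Gaussian choices for f and ρ, the quadrature grids, and the test values of a and ω are all assumptions), the following compares the Fourier transform of R[f](a, ·) in b against \hat{f}(\omega a)\,\overline{\hat{\rho}(\omega)} in one dimension:

```python
import numpy as np

# Illustrative 1-D choices (assumptions, not from the paper):
# Gaussian target f and Gaussian ridgelet function rho, so every integral converges nicely.
f = lambda x: np.exp(-x**2 / 2.0)
rho = lambda t: np.exp(-t**2 / 2.0)

# Uniform quadrature grids over a truncated real line (the integrands decay fast).
x = np.linspace(-20.0, 20.0, 4001)
b = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
db = b[1] - b[0]

def ridgelet_transform(a):
    """R[f](a, b) = integral of f(x) * conj(rho(a*x - b)) dx, evaluated on the b-grid (rho is real here)."""
    integrand = f(x)[None, :] * rho(a * x[None, :] - b[:, None])  # shape (len(b), len(x))
    return integrand.sum(axis=1) * dx

def fourier_in_b(values, omega):
    """Continuous Fourier transform in b at frequency omega, by direct quadrature."""
    return np.sum(values * np.exp(-1j * omega * b)) * db

def f_hat(xi):
    """Closed-form Fourier transform of the Gaussian f (rho is identical, so it shares this transform)."""
    return np.sqrt(2.0 * np.pi) * np.exp(-xi**2 / 2.0)

a, omega = 1.3, 0.7
lhs = fourier_in_b(ridgelet_transform(a), omega)   # Fourier transform of the slice R[f](a, .)
rhs = f_hat(omega * a) * np.conj(f_hat(omega))     # hat{f}(omega a) * conj(hat{rho}(omega))
print(abs(lhs - rhs))  # should be ~0 up to quadrature error
```

With these rapidly decaying Gaussians, simple truncated-grid quadrature suffices, and the two quantities agree to quadrature accuracy.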
Key ideas extracted from the source content at arxiv.org, by Sho Sonoda, I..., 04-22-2024.
https://arxiv.org/pdf/2402.15984.pdf