This research paper presents new findings on the universal approximation capabilities of narrow deep neural networks, particularly those with leaky ReLU activations, and explores their implications for autoencoders and normalizing flows.
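To make the "narrow" regime concrete, here is a minimal NumPy sketch (all names hypothetical) of a deep network whose hidden width equals the input dimension, with leaky ReLU throughout. It shows only the forward pass of such an architecture, not the paper's explicit approximation construction.

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: identity for x >= 0, slope alpha for x < 0.
    For alpha > 0 it is invertible, which is what connects narrow
    networks to the flow-style constructions mentioned above."""
    return np.where(x >= 0, x, alpha * x)

def narrow_deep_net(x, weights, biases, alpha=0.01):
    """Forward pass of a deep network whose hidden width equals the
    input dimension -- the narrow regime. `weights`/`biases` are
    illustrative placeholders, not the paper's construction."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = leaky_relu(h @ W + b, alpha)
    return h @ weights[-1] + biases[-1]

# Toy usage: width-2 hidden layers on 2-D input (width == input dim).
rng = np.random.default_rng(0)
d, depth = 2, 6
weights = [rng.standard_normal((d, d)) for _ in range(depth)]
weights += [rng.standard_normal((d, 1))]
biases = [rng.standard_normal(d) for _ in range(depth)]
biases += [rng.standard_normal(1)]
x = rng.standard_normal((5, d))
print(narrow_deep_net(x, weights, biases).shape)  # (5, 1)
```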
Single-hidden-layer neural networks with non-polynomial activation functions have the universal approximation property over a variety of function spaces, including those defined on non-compact domains. That is, such networks can approximate arbitrary functions to any desired accuracy in a wide range of function spaces, such as weighted spaces, L^p spaces, and (weighted) Sobolev spaces.
Neural networks with non-polynomial activation functions can approximate any continuous function on non-compact domains, including unbounded ones, to arbitrary accuracy.
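To illustrate the shallow universal approximation claim, the hedged sketch below fits a single-hidden-layer network with a non-polynomial activation (tanh) by fixing random hidden weights and solving only the output layer by least squares; the maximum error on the grid typically shrinks as the width grows. This random-feature shortcut is an illustration of the density statement, not the proof technique of either result above.

```python
import numpy as np

rng = np.random.default_rng(1)

def shallow_net_fit(x, y, width):
    """Single-hidden-layer network with a non-polynomial activation
    (tanh). Hidden weights are drawn at random and the output layer
    is solved by least squares -- a quick way to see the density
    claim in action, not a proof."""
    W = rng.standard_normal((1, width))
    b = rng.standard_normal(width)
    H = np.tanh(x @ W + b)                  # hidden features, (n, width)
    c, *_ = np.linalg.lstsq(H, y, rcond=None)
    return lambda z: np.tanh(z @ W + b) @ c

# Fit sin on a wide grid: error drops as the hidden width grows.
x = np.linspace(-10, 10, 400).reshape(-1, 1)
y = np.sin(x).ravel()
for width in (5, 50, 500):
    f = shallow_net_fit(x, y, width)
    print(width, np.max(np.abs(f(x) - y)))
```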
This paper presents a novel, constructive proof of the universal approximation theorem for neural networks with joint-group-equivariant feature maps, unifying the understanding of approximation capabilities for both shallow and deep networks.
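For readers unfamiliar with equivariant feature maps, the sketch below checks the defining property phi(g.x) = g.phi(x) for the cyclic shift group acting on signals, using circular correlation as the feature map. It illustrates plain group equivariance only; it is not the paper's joint-group-equivariant construction.

```python
import numpy as np

def shift(x, g):
    """Action of the cyclic group Z_n on signals: rotate entries by g."""
    return np.roll(x, g)

def feature_map(x, w):
    """Circular correlation with a filter w -- a standard example of a
    feature map equivariant to cyclic shifts: phi(g.x) = g.phi(x)."""
    n = len(x)
    return np.array([np.dot(np.roll(x, -k), w) for k in range(n)])

rng = np.random.default_rng(2)
x, w, g = rng.standard_normal(8), rng.standard_normal(8), 3
lhs = feature_map(shift(x, g), w)   # act on the input first
rhs = shift(feature_map(x, w), g)   # act on the features afterwards
print(np.allclose(lhs, rhs))        # True: the map is shift-equivariant
```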