This article introduces a novel optimization technique, Anisotropic Gaussian Smoothing (AGS), which enhances traditional gradient-based optimization algorithms (GD, SGD, Adam) by employing a non-local gradient derived from anisotropic Gaussian smoothing, enabling them to effectively escape suboptimal local minima and improve convergence.
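The core idea can be sketched in a few lines: replace the local gradient with a Monte Carlo estimate of the gradient of a Gaussian-smoothed objective, where the smoothing covariance is anisotropic (different widths per direction). The estimator, the diagonal covariance `sigmas`, and the toy objective below are illustrative assumptions, not the paper's exact AGS procedure.

```python
# Minimal sketch: gradient descent driven by an anisotropically smoothed
# ("non-local") gradient. Assumes a diagonal covariance diag(sigmas**2);
# the paper's actual AGS estimator and covariance adaptation may differ.
import numpy as np

def smoothed_grad(grad_f, x, sigmas, n_samples=64, rng=None):
    """Estimate the gradient of E[f(x + eps)] with eps ~ N(0, diag(sigmas**2))."""
    rng = rng or np.random.default_rng(0)
    eps = rng.normal(size=(n_samples, x.size)) * sigmas       # anisotropic perturbations
    return np.mean([grad_f(x + e) for e in eps], axis=0)      # average of local gradients

def ags_gd(grad_f, x0, sigmas, lr=0.1, steps=200):
    """Plain GD, except each step uses the smoothed (non-local) gradient."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * smoothed_grad(grad_f, x, sigmas)
    return x

# Toy 2-D multimodal objective: sum of x_i^2 - 0.5*cos(5*x_i); smoothing
# averages over the ripples, so the iterate is less likely to stall in them.
grad = lambda x: 2 * x + 2.5 * np.sin(5 * x)
x_star = ags_gd(grad, x0=[2.0, -1.5], sigmas=np.array([0.5, 0.1]))
```

The same smoothed-gradient estimate can be fed to SGD or Adam in place of the raw gradient, which is how the smoothing composes with the standard optimizers mentioned above.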
This paper reinterprets existing gradient-based optimization methods through a random function framework and, on that basis, proposes a new optimization algorithm, Random Function Descent (RFD). RFD automatically adjusts its step size using Bayesian optimization theory, enabling efficient and explainable optimization even in high dimensions.
This paper introduces Random Function Descent (RFD), a novel optimization algorithm derived from a "random function" framework that provides a theoretical foundation for understanding and selecting step sizes in gradient-based optimization, offering advantages over traditional convex optimization approaches.
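To make the step-size idea concrete, here is a minimal sketch of a random-function step-size rule: model the loss as a stationary Gaussian random field, condition on the current value and gradient, and step along the negative gradient to the minimizer of the resulting conditional mean. The squared-exponential covariance, the hyperparameters `mu` and `ell`, and the closed-form step below are assumptions made for illustration, not the paper's exact derivation or covariance-estimation procedure.

```python
# Sketch of a conditional-mean step-size rule in the spirit of RFD.
# Assumptions: isotropic squared-exponential covariance with lengthscale `ell`,
# prior mean `mu`; conditioning on (f(x), grad f(x)) gives the 1-D surrogate
#   m(s) = mu + exp(-s^2 / (2 ell^2)) * ((f(x) - mu) - s * ||grad f(x)||)
# along the negative-gradient ray, and m'(s) = 0 yields the quadratic below.
import numpy as np

def step_size(f_x, grad_norm, mu, ell):
    """Positive root of grad_norm*s^2 - (f_x - mu)*s - grad_norm*ell^2 = 0."""
    a = f_x - mu
    return (a + np.sqrt(a**2 + 4.0 * (grad_norm * ell)**2)) / (2.0 * grad_norm)

def rfd_like(f, grad_f, x0, mu, ell, steps=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad_f(x)
        gn = np.linalg.norm(g)
        if gn < 1e-12:
            break
        s = step_size(f(x), gn, mu, ell)   # step size from the surrogate, no tuning
        x = x - s * g / gn                 # unit-norm descent direction
    return x

# Example on a quadratic; mu = 0.0 plays the role of an assumed prior mean
# (set to the known minimum here). The iterate approaches the minimizer to
# within roughly the lengthscale ell.
f = lambda x: float(np.sum(x**2))
grad = lambda x: 2.0 * x
x_star = rfd_like(f, grad, x0=[3.0, -2.0], mu=0.0, ell=0.1)
```

The point of the sketch is that the step size falls out of the probabilistic model of the loss rather than being hand-tuned, which is the sense in which RFD offers an explainable alternative to step-size heuristics from convex optimization.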
This research paper evaluates the performance of six classical gradient-based optimization techniques, highlighting their strengths and weaknesses across different objective functions, and emphasizing the importance of initial point selection and the challenges posed by nonlinearity and multimodality.
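The sensitivity to the initial point is easy to reproduce: on a multimodal objective, plain gradient descent settles into whichever basin it starts in. The 1-D Rastrigin-type objective and settings below are illustrative, not the six methods or test functions benchmarked in the paper.

```python
# Initialization sensitivity of gradient descent on a multimodal objective.
import numpy as np

def f(x):          # 1-D Rastrigin: global minimum at x = 0, local minima near integers
    return x**2 - 10.0 * np.cos(2 * np.pi * x) + 10.0

def grad_f(x):
    return 2.0 * x + 20.0 * np.pi * np.sin(2 * np.pi * x)

def gd(x0, lr=0.002, steps=500):
    x = x0
    for _ in range(steps):
        x = x - lr * grad_f(x)
    return x

for x0 in (0.3, 2.2, 4.6):
    x_final = gd(x0)
    print(f"start {x0:+.1f} -> end {x_final:+.3f}, f = {f(x_final):.3f}")
# Each start converges to the nearest basin, so only some initializations
# reach the global minimum at x = 0.
```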