The paper introduces EvoMAL, a framework for learning symbolic loss functions by combining genetic programming with unrolled differentiation. It aims to improve convergence, sample efficiency, and inference performance across a variety of supervised learning tasks. The reported results show that the learned loss functions outperform baseline methods on both the in-sample tasks used during the search and held-out out-of-sample tasks.
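To make the unrolled-differentiation idea concrete, the sketch below meta-learns a small parameterized loss by backpropagating a held-out task loss through several inner SGD steps. This is a minimal illustration assuming PyTorch: the `MetaLoss` network, the toy regression task, the inner step count, and the learning rates are all hypothetical choices made for exposition, not EvoMAL's actual components (the paper applies this mechanism to tune the parameters of GP-discovered symbolic losses).

```python
import torch
import torch.nn as nn

class MetaLoss(nn.Module):
    """Small learned loss network: maps (prediction, target) pairs to a scalar."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, y_pred, y_true):
        # Softplus keeps the learned loss non-negative (a simplifying assumption).
        per_sample = self.net(torch.stack([y_pred, y_true], dim=-1))
        return nn.functional.softplus(per_sample).mean()

meta_loss = MetaLoss()
meta_opt = torch.optim.Adam(meta_loss.parameters(), lr=1e-3)

# Toy linear regression task: targets are the sum of the features.
x_tr, x_val = torch.randn(64, 4), torch.randn(64, 4)
y_tr, y_val = x_tr.sum(dim=1), x_val.sum(dim=1)

for _ in range(100):
    # Fresh base-model weights each meta-step; they are updated functionally
    # so gradients can flow back through the whole unrolled inner loop.
    w = torch.zeros(4, requires_grad=True)
    for _ in range(5):  # K = 5 unrolled inner SGD steps
        inner = meta_loss(x_tr @ w, y_tr)
        (g,) = torch.autograd.grad(inner, w, create_graph=True)
        w = w - 0.1 * g  # differentiable weight update
    # Outer objective: the true task loss (MSE) on held-out data.
    outer = ((x_val @ w - y_val) ** 2).mean()
    meta_opt.zero_grad()
    outer.backward()
    meta_opt.step()
```

The `create_graph=True` flag is what keeps the inner updates differentiable, so the outer gradient can reach the loss function's own parameters.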
The study centers on efficient techniques for learning loss functions for machine learning models. By combining genetic programming, which searches over symbolic loss structures, with gradient-based optimization of the candidates, the proposed framework achieves marked improvements in metrics such as mean squared error (for regression) and error rate (for classification) across a range of datasets.
Key contributions include the design of a task- and model-agnostic search space for symbolic loss functions, the integration of a gradient-based local-search mechanism into the evolutionary process, and the successful application of EvoMAL to diverse supervised learning tasks. The results highlight EvoMAL's potential to improve both the efficiency and the effectiveness of loss-function learning.
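To illustrate what a task- and model-agnostic search space for symbolic losses can look like, here is a hand-rolled sketch that represents candidate losses as expression trees over a target `y` and a prediction `fx`. The primitive set, the "protected" logarithm, and the tree-growing heuristics are common genetic-programming conventions chosen for illustration; they are assumptions, not EvoMAL's published grammar.

```python
import math
import random

# Illustrative primitive set; real systems typically use a larger, carefully
# "protected" set so that every candidate is defined on all inputs.
BINARY = {"+": lambda a, b: a + b,
          "-": lambda a, b: a - b,
          "*": lambda a, b: a * b}
UNARY = {"sq":  lambda a: a * a,
         "abs": abs,
         "log": lambda a: math.log(abs(a) + 1e-9)}  # protected log
TERMINALS = ("y", "fx")  # target and model prediction

def random_tree(depth=3):
    """Grow a random candidate loss as a nested tuple (a GP expression tree)."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    if random.random() < 0.5:
        return (random.choice(list(UNARY)), random_tree(depth - 1))
    op = random.choice(list(BINARY))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, y, fx):
    """Evaluate the symbolic loss on a single (target, prediction) pair."""
    if tree == "y":
        return y
    if tree == "fx":
        return fx
    op = tree[0]
    if op in UNARY:
        return UNARY[op](evaluate(tree[1], y, fx))
    return BINARY[op](evaluate(tree[1], y, fx), evaluate(tree[2], y, fx))

# Squared error expressed in this grammar:
squared_error = ("sq", ("-", "y", "fx"))
print(evaluate(squared_error, y=1.0, fx=0.6))  # ≈ 0.16
```

Because the terminals are just the target and the prediction, the same trees can score any model's outputs on any supervised task, which is what makes such a space task- and model-agnostic.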
Source: arxiv.org