The paper introduces EvoMAL, a novel framework for learning symbolic loss functions by combining genetic programming with unrolled differentiation. It aims to improve convergence, sample efficiency, and final inference performance across a range of supervised learning tasks. The reported results show superior performance over baseline methods on both in-sample and out-of-sample tasks.
The study focuses on developing efficient techniques for optimizing the loss functions used to train machine learning models. By combining genetic programming with gradient-based optimization, the proposed framework achieves significant improvements on performance metrics such as mean squared error and error rate across a variety of datasets.
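To make the hybrid search concrete, here is a minimal, dependency-free sketch of the idea: each candidate symbolic loss is scored by training a small model with it and measuring how well the trained model performs. The primitive set, the one-parameter model, and the random-search stand-in for evolution are all hypothetical simplifications, not EvoMAL's actual grammar or algorithm.

```python
import math

# Hypothetical primitive set of candidate symbolic losses over a target y
# and a prediction p (illustrative only, not the paper's search space).
PRIMITIVES = {
    "sq_err":   lambda y, p: (y - p) ** 2,
    "abs_err":  lambda y, p: abs(y - p),
    "log_cosh": lambda y, p: math.log(math.cosh(y - p)),
}

def evaluate_candidate(loss_fn, data, lr=0.1, steps=50):
    """Score a candidate loss by training a 1-parameter model p = w*x on
    `data` with finite-difference gradients, then measuring final MSE
    (the meta-objective). Returns (score, trained_weight)."""
    w, eps = 0.0, 1e-6
    for _ in range(steps):
        def mean_loss(wv):
            return sum(loss_fn(y, wv * x) for x, y in data) / len(data)
        # Finite-difference gradient of the candidate loss w.r.t. w.
        g = (mean_loss(w + eps) - mean_loss(w - eps)) / (2 * eps)
        w -= lr * g
    score = sum((y - w * x) ** 2 for x, y in data) / len(data)
    return score, w

# Toy regression data generated by y = 2x.
data = [(x, 2.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]

# "Evolution" reduced to exhaustive search over the primitive set for brevity;
# genetic programming would instead mutate and recombine expression trees.
best_name, (best_score, best_w) = min(
    ((name, evaluate_candidate(fn, data)) for name, fn in PRIMITIVES.items()),
    key=lambda item: item[1][0],
)
print(best_name, round(best_w, 3))
```

In the full method, the outer loop would be a genetic-programming search over expression trees rather than a fixed menu of losses, but the fitness signal is the same: how well a model trains under the candidate loss.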
Key contributions include the design of a task- and model-agnostic search space for symbolic loss functions, the integration of a local search mechanism into the optimization process, and the successful application of EvoMAL to diverse supervised learning tasks. The results highlight the potential of loss function learning to enhance the efficiency and effectiveness of machine learning algorithms.
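One way to read the "local search via unrolled differentiation" component: after evolution proposes a loss structure, its numeric coefficients are refined by differentiating the meta-objective through the unrolled inner training loop. The sketch below tunes a single hypothetical coefficient `c` in a parameterized loss; for simplicity the meta-gradient is approximated by finite differences rather than exact unrolled differentiation, and all names are illustrative.

```python
def unrolled_train(c, data, lr=0.1, steps=20):
    """Train a model p = w*x on the parameterized loss c*(y - w*x)^2
    for a fixed number of inner gradient steps; return the final w."""
    w = 0.0
    for _ in range(steps):
        g = sum(-2.0 * c * x * (y - w * x) for x, y in data) / len(data)
        w -= lr * g
    return w

def meta_objective(c, data):
    """Meta-objective: MSE of the model trained under coefficient c."""
    w = unrolled_train(c, data)
    return sum((y - w * x) ** 2 for x, y in data) / len(data)

# Toy regression data generated by y = 2x.
data = [(x, 2.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]

# Local search: descend the meta-objective with respect to c. A numeric
# meta-gradient stands in for exact differentiation through the unrolled
# inner loop, keeping this sketch dependency-free.
c, meta_lr, eps = 0.2, 0.5, 1e-4
for _ in range(30):
    mg = (meta_objective(c + eps, data) - meta_objective(c - eps, data)) / (2 * eps)
    c -= meta_lr * mg
print(round(c, 2), round(meta_objective(c, data), 6))
```

Because the inner loop is truncated to a fixed number of steps, the meta-gradient rewards coefficients that make training converge quickly within that budget, which is the intuition behind combining symbolic structure search with gradient-based coefficient refinement.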
Key insights extracted from the paper by Christian Ra... on arxiv.org, 03-05-2024.
https://arxiv.org/pdf/2403.00865.pdf