Unveiling the Potential of Large Language Models in Mathematical Optimization Problems
The authors evaluate how well various Large Language Models (LLMs) formulate optimization problems from natural language descriptions, finding that GPT-4 performs best while smaller models such as Llama-2-7b lag considerably. To close this gap, the work introduces LM4OPT, a progressive fine-tuning framework that adapts Llama-2-7b to this task.