Optimization modeling is fundamental to decision-making across diverse domains. Despite progress in automating optimization formulation from natural language descriptions, Large Language Models (LLMs) remain prone to hallucinations and often fail to generate formally correct and usable models, posing a challenge for reliable automation. Inspired by the success of Reinforcement Learning (RL) in enhancing Large Reasoning Models, we present Solver-Informed Reinforcement Learning (SIRL), a novel framework that significantly improves the authenticity of LLMs for optimization modeling through Reinforcement Learning with Verifiable Reward, leveraging external optimization solvers as verifiers. These verifiers automatically assess the executable code and the instance-level mathematical model represented by the associated LP file, yielding precise and comprehensive feedback signals on syntax, feasibility, and solution quality that serve as direct rewards for the RL process. This automated verification process, built in particular on classic optimization solvers, also underpins our instance-enhanced self-consistency method for synthesizing high-quality training data. Extensive experiments on diverse public benchmarks demonstrate that SIRL achieves state-of-the-art performance, substantially outperforming existing methods in generating accurate and executable optimization models. Our code is publicly available at https://github.com/Cardinal-Operations/SIRL.
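To make the verifiable reward concrete, the following is a minimal sketch of how a solver-informed reward could be computed for one rollout, assuming a Python workflow in which the generated program writes its instance-level model to an LP file, and using Gurobi (gurobipy) as the verifying solver. The solver choice, reward weights, and tolerances shown here are illustrative assumptions, not the exact SIRL configuration.

```python
# Hypothetical sketch of a solver-informed reward for one generated rollout.
# Assumptions: the generated program is a standalone Python script that writes
# its instance-level model to an LP file; Gurobi (gurobipy) acts as the verifier.
import math
import subprocess

import gurobipy as gp
from gurobipy import GRB


def solver_informed_reward(code_path: str, lp_path: str, ref_objective: float,
                           rel_tol: float = 1e-4, time_limit: int = 60) -> float:
    """Score one rollout: executable code -> instance-level LP file -> solver verdict."""
    reward = 0.0

    # 1) Syntax / execution check: the generated program must run and emit an LP file.
    try:
        subprocess.run(["python", code_path], check=True, timeout=time_limit,
                       capture_output=True)
        reward += 0.2
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
        return reward  # code does not execute; no further credit

    # 2) Feasibility check: load the LP file into the solver and optimize it.
    try:
        model = gp.read(lp_path)
        model.Params.OutputFlag = 0
        model.optimize()
    except gp.GurobiError:
        return reward  # malformed or unreadable LP file
    if model.Status != GRB.OPTIMAL:
        return reward  # infeasible or unbounded instance-level model
    reward += 0.3

    # 3) Solution quality: compare the optimal objective with a reference answer.
    if math.isclose(model.ObjVal, ref_objective, rel_tol=rel_tol, abs_tol=1e-6):
        reward += 0.5
    return reward
```

The staged structure (execution, then feasibility, then objective agreement) mirrors the three feedback signals named in the abstract; in practice the same solver verdicts can also be used to filter self-consistent samples when synthesizing training data.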