Combining machine learning and constrained optimization, Predict+Optimize tackles optimization problems containing parameters that are unknown at solving time. Prior works focus on cases where the unknowns appear only in the objectives. A recently proposed framework caters also for unknowns in the constraints by introducing a loss function, called Post-hoc Regret, that takes into account the cost of correcting an unsatisfiable prediction. Since Post-hoc Regret is non-differentiable, the previous work computes only its approximation. While the notion of Post-hoc Regret is general, its specific implementation is applicable only to packing and covering linear programs. In this paper, we first show how to compute Post-hoc Regret exactly for any optimization problem solvable by a recursive algorithm satisfying simple conditions. Experiments demonstrate a substantial improvement in solution quality compared to the earlier approximation approach. Furthermore, we study experimentally the empirical behavior of different combinations of correction and penalty functions used in the Post-hoc Regret on the same benchmarks. The results provide insights for defining an appropriate Post-hoc Regret in different application scenarios.
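To make the loss concrete, the following is a minimal sketch of Post-hoc Regret on a toy fractional knapsack whose capacity is the unknown constraint parameter. The greedy solver, the uniform-scaling correction function, and the linear penalty function are illustrative assumptions for this sketch, not the paper's actual benchmarks or interface.

```python
import numpy as np

# Toy instance: choose x in [0, 1]^n to maximize v @ x subject to w @ x <= C,
# a fractional knapsack whose capacity C is the unknown constraint parameter.
v = np.array([3.0, 2.0, 1.0])   # known item values
w = np.array([2.0, 1.0, 1.0])   # known item weights

def solve(C):
    """Greedy by value density, which is optimal for the fractional knapsack."""
    x = np.zeros_like(v)
    for i in np.argsort(-v / w):
        x[i] = min(1.0, max(0.0, (C - w @ x) / w[i]))
    return x

def correct(x, C):
    """Correction function: scale the packing down uniformly until it fits."""
    used = w @ x
    return x if used <= C else x * (C / used)

def penalty(x_hat, x_corr):
    """Penalty function: a unit cost per unit of discarded item (one possible choice)."""
    return float(np.sum(x_hat - x_corr))

def post_hoc_regret(C_pred, C_true):
    x_hat = solve(C_pred)            # estimated solution under the prediction
    x_corr = correct(x_hat, C_true)  # repaired to be feasible in hindsight
    x_star = solve(C_true)           # true (hindsight) optimal solution
    # Maximization problem: regret = optimal value - corrected value + penalty.
    return v @ x_star - v @ x_corr + penalty(x_hat, x_corr)

print(post_hoc_regret(C_pred=4.0, C_true=3.0))  # 1.25: over-predicted capacity is corrected
print(post_hoc_regret(C_pred=3.0, C_true=3.0))  # 0.0: perfect prediction, no regret
```

With an over-predicted capacity, the estimated packing violates the true constraint, so the correction shrinks it and the penalty charges for the discarded fraction; different choices of correction and penalty functions yield different Post-hoc Regret values, which is precisely the combination the experiments compare.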