In recent years, predict-and-optimize approaches (Elmachtoub and Grigas 2021; Wilder, Dilkina, and Tambe 2019) have received increasing attention. In these settings, the predictions of machine learning (ML) models are fed to downstream optimization problems for decision making. Predict-and-optimize approaches propose to train the ML models, often neural networks, by directly optimizing the quality of the decisions made by the optimization solvers. However, one major bottleneck of predict-and-optimize approaches is that the optimization problem must be solved for each training instance at every epoch. To address this challenge, Mulamba et al. (2021) propose noise contrastive estimation over a cache of feasible solutions. In this work, we show that noise contrastive estimation can be considered a case of learning to rank the solution cache. We also develop pairwise and listwise ranking loss functions, which can be differentiated in closed form without the need to solve the optimization problem. By training with respect to these surrogate loss functions, we empirically show that we are able to minimize the regret of the predictions.
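To make the idea concrete, below is a minimal sketch (our own illustration, not the paper's released code) of what pairwise and listwise ranking losses over a solution cache might look like in PyTorch, assuming a linear objective and a minimization problem. The function names, the hinge `margin`, and the temperature `tau` are hypothetical choices for this sketch.

```python
import torch

def pairwise_ranking_loss(c_hat, cache, c_true, margin=0.1):
    """Sketch of a pairwise ranking loss over cached feasible solutions.

    c_hat:  (d,) predicted cost vector from the ML model
    c_true: (d,) true cost vector (used only to identify the best solution)
    cache:  (k, d) cached feasible solutions, one per row

    For a minimization problem, the cached solution that is best under
    the true costs should also score best under the predicted costs;
    every violating (best, other) pair incurs a hinge penalty.
    """
    true_obj = cache @ c_true              # (k,) true objective values
    pred_obj = cache @ c_hat               # (k,) predicted objective values
    best = torch.argmin(true_obj).item()   # index of the best cached solution
    gaps = margin + pred_obj[best] - pred_obj
    mask = torch.ones(cache.shape[0], dtype=torch.bool)
    mask[best] = False                     # drop the self-pair
    return torch.clamp(gaps[mask], min=0).mean()

def listwise_ranking_loss(c_hat, cache, c_true, tau=1.0):
    """Sketch of a listwise (ListNet-style) loss: match the softmax
    distribution over cached solutions induced by the predicted costs
    to the one induced by the true costs (lower objective = higher score)."""
    p_true = torch.softmax(-(cache @ c_true) / tau, dim=0)
    log_p_hat = torch.log_softmax(-(cache @ c_hat) / tau, dim=0)
    return -(p_true * log_p_hat).sum()     # cross-entropy between rankings
```

Note that both losses involve only dot products between the predicted costs and the cached solutions, so backpropagation never requires a solver call; this is the computational advantage over solving the optimization problem at every training step.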