The predict+optimize problem combines machine learning of problem coefficients with a combinatorial optimization problem that uses the predicted coefficients. While this problem can be solved in two separate stages, it is better to directly minimize the optimization loss. However, this requires differentiating through a discrete, non-differentiable combinatorial function. Most existing approaches use some form of surrogate gradient. Demirovic et al. showed how to directly express the loss of the optimization problem in terms of the predicted coefficients as a piece-wise linear function. However, their approach is restricted to optimization problems with a dynamic programming formulation. In this work we propose a novel divide and conquer algorithm to tackle optimization problems without this restriction and predict their coefficients using the optimization loss. We also introduce a greedy version of this approach, which achieves similar results with less computation. We compare our approach with other approaches to the predict+optimize problem and show we can successfully tackle some hard combinatorial problems better than other predict+optimize methods.