Mixed-integer programming (MIP) technology offers a generic way of formulating and solving combinatorial optimization problems. While generally reliable, state-of-the-art MIP solvers base many crucial decisions on hand-crafted heuristics, largely ignoring common patterns within a given instance distribution of the problem of interest. Here, we propose MIP-GNN, a general framework for enhancing such solvers with data-driven insights. By encoding the variable-constraint interactions of a given mixed-integer linear program (MILP) as a bipartite graph, we leverage state-of-the-art graph neural network architectures to predict variable biases, i.e., component-wise averages of (near-)optimal solutions, indicating how likely each variable is to be set to 0 or 1 in (near-)optimal solutions of binary MILPs. In turn, the predicted biases stemming from a single, once-trained model are used to guide the solver, replacing heuristic components. We integrate MIP-GNN into a state-of-the-art MIP solver, applying it to tasks such as node selection and warm-starting, and show significant improvements over the solver's default settings on two classes of challenging binary MILPs.
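To make the bipartite encoding concrete, the sketch below builds a graph from a binary MILP min c^T x s.t. Ax <= b: one node per variable, one per constraint, and an edge for every nonzero coefficient. The specific node and edge features (objective coefficient, right-hand side, constraint coefficient) are illustrative placeholders, not necessarily the exact features used by MIP-GNN.

```python
import numpy as np

def milp_to_bipartite(A, b, c):
    """Encode a binary MILP (min c^T x s.t. A x <= b, x in {0,1}^n)
    as a bipartite graph: variable nodes on one side, constraint
    nodes on the other, with an edge (i, j) whenever A[j, i] != 0.
    Feature choices here are a hypothetical minimal sketch."""
    m, n = A.shape
    var_feats = c.reshape(n, 1).astype(float)   # variable node feature: objective coeff
    con_feats = b.reshape(m, 1).astype(float)   # constraint node feature: right-hand side
    edges, edge_feats = [], []
    for j in range(m):
        for i in range(n):
            if A[j, i] != 0:
                edges.append((i, j))            # connect variable i and constraint j
                edge_feats.append([A[j, i]])    # edge feature: constraint coefficient
    return var_feats, con_feats, np.array(edges), np.array(edge_feats, dtype=float)

# Tiny example: x1 + x2 <= 1 and x2 + x3 <= 1 (a vertex-cover-like structure).
A = np.array([[1, 1, 0],
              [0, 1, 1]])
b = np.array([1, 1])
c = np.array([1, 1, 1])
vf, cf, E, ef = milp_to_bipartite(A, b, c)
print(E.shape)  # four nonzeros in A -> four edges: (4, 2)
```

A GNN operating on this graph can pass messages between variable and constraint nodes and output one scalar per variable node, which is then interpreted as the predicted bias toward 0 or 1.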