While mixed-integer linear programming (MILP) is NP-hard in general, practical MILP solvers have achieved roughly a 100-fold speedup over the past twenty years. Still, many classes of MILPs quickly become unsolvable as their sizes grow, motivating researchers to seek new acceleration techniques. Deep learning has produced strong empirical results here, many of them obtained by applying graph neural networks (GNNs) to decisions made at various stages of the MILP solution process. This work identifies a fundamental limitation: there exist feasible and infeasible MILPs that every GNN must treat identically, showing that GNNs lack the power to express general MILPs. We then show that, by restricting attention to unfoldable MILPs or by appending random features, there exist GNNs that can reliably predict MILP feasibility, optimal objective values, and optimal solutions up to any prescribed precision. We conducted small-scale numerical experiments to validate these theoretical findings.
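To make the setting concrete, the sketch below shows one common way an MILP is handed to a GNN: encode it as a bipartite graph with variable nodes on one side and constraint nodes on the other, edges weighted by the constraint matrix, and run message passing between the two sides. This is an illustrative toy in NumPy, not the paper's architecture; the feature layout, hidden width, and the appended random feature (one symmetry-breaking device mentioned above) are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MILP: min c^T x  subject to  A x <= b,  x integer.
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([1.0, 1.0])

# Variable nodes carry (objective coeff, integrality flag, random feature);
# constraint nodes carry (rhs, random feature). The random features are the
# kind of symmetry-breaking input that lets GNNs separate MILPs that would
# otherwise be indistinguishable to them.
var_feat = np.stack([c, np.ones_like(c), rng.random(c.shape)], axis=1)  # (n, 3)
con_feat = np.stack([b, rng.random(b.shape)], axis=1)                   # (m, 2)

def message_pass(var_feat, con_feat, A, W_vc, W_cv):
    """One round of bipartite message passing, weighted by the entries of A."""
    con_msg = np.tanh(A @ (var_feat @ W_vc))    # variables -> constraints
    var_msg = np.tanh(A.T @ (con_feat @ W_cv))  # constraints -> variables
    return var_msg, con_msg

d = 4  # hidden width (arbitrary for this sketch)
W_vc = rng.standard_normal((3, d))
W_cv = rng.standard_normal((2, d))
var_h, con_h = message_pass(var_feat, con_feat, A, W_vc, W_cv)
print(var_h.shape, con_h.shape)  # per-node embeddings for a readout head
```

A readout layer pooling these node embeddings would then emit the quantities discussed above, such as a feasibility prediction or an estimate of the optimal objective value.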