The recent work ``Combinatorial Optimization with Physics-Inspired Graph Neural Networks'' [Nat Mach Intell 4 (2022) 367] introduces a physics-inspired unsupervised Graph Neural Network (GNN) to solve combinatorial optimization problems on sparse graphs. To test the performance of these GNNs, the authors show numerical results for two fundamental problems: maximum cut and maximum independent set (MIS). They conclude that "the graph neural network optimizer performs on par or outperforms existing solvers, with the ability to scale beyond the state of the art to problems with millions of variables." In this comment, we show that a simple greedy algorithm, running in almost linear time, can find solutions to the MIS problem of much better quality than the GNN. On problems with a million variables, the greedy algorithm is faster than the GNN by a factor of $10^4$. We see no good reason to solve the MIS with these GNNs, just as there is no good reason to use a sledgehammer to crack a nut. More generally, many claims of the superiority of neural networks in solving combinatorial problems are at risk of not being solid enough, since we lack standard benchmarks based on really hard problems. We propose one such hard benchmark, and we hope to see future neural network optimizers tested on these problems before any claim of superiority is made.
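The greedy algorithm mentioned above is not spelled out in the abstract; a standard choice for MIS is the minimum-degree greedy heuristic: repeatedly take a vertex of smallest remaining degree, add it to the independent set, and delete it together with its neighbors. The following is a minimal sketch of that idea (the function name `greedy_mis` and the adjacency-dict representation are illustrative assumptions, not the authors' actual code); this plain version is quadratic, while a bucket-queue implementation of the same rule achieves the near-linear running time.

```python
def greedy_mis(adj):
    """Minimum-degree greedy heuristic for maximum independent set.

    adj: dict mapping each vertex to the set of its neighbors.
    Returns a list of vertices forming an independent set.
    """
    # Work on a mutable local copy so the caller's graph is untouched.
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    independent = []
    while adj:
        # Pick a vertex of minimum remaining degree.
        v = min(adj, key=lambda u: len(adj[u]))
        independent.append(v)
        # Delete v and all of its neighbors from the graph.
        removed = adj[v] | {v}
        for u in removed:
            adj.pop(u, None)
        for u in adj:
            adj[u] -= removed
    return independent
```

For example, on the path graph 0-1-2-3-4 this returns the three-vertex independent set {0, 2, 4}, which is optimal for that graph.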