In Nature Machine Intelligence 4, 367 (2022), Schuetz et al. propose a scheme that employs graph neural networks (GNNs) as a heuristic to solve a variety of classical, NP-hard combinatorial optimization problems. They describe how the network is trained on sample instances and evaluate the resulting GNN heuristic with widely used techniques to determine its ability to succeed. Clearly, the idea of harnessing the powerful abilities of such networks to ``learn'' the intricacies of complex, multimodal energy landscapes in such a hands-off approach seems enticing. And based on the observed performance, the heuristic promises to be highly scalable, with a computational cost linear in the input size $n$, although there is likely a significant overhead in the pre-factor due to the GNN itself. However, closer inspection shows that the reported results for this GNN are only marginally better than those for gradient descent and are outperformed by a simple greedy algorithm on, for example, Max-Cut. The discussion also highlights what I believe are some common misconceptions in the evaluation of heuristics.
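To make the greedy baseline concrete, the following is a minimal sketch of a one-pass greedy heuristic for Max-Cut (an assumption for illustration; the comparison above does not specify which greedy variant was used). Each vertex is placed on whichever side of the partition cuts more edges to its already-placed neighbors, giving a linear-time baseline of the kind a learned heuristic should beat.

```python
def greedy_max_cut(n, edges):
    """One-pass greedy Max-Cut sketch.

    n     -- number of vertices, labeled 0..n-1
    edges -- list of (u, v) pairs
    Returns (side, cut_size), where side[v] is 0 or 1.
    """
    # Build an adjacency list for quick neighbor lookup.
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    side = [None] * n
    for v in range(n):
        # Count already-placed neighbors on each side of the cut.
        count = [0, 0]
        for u in adj[v]:
            if side[u] is not None:
                count[side[u]] += 1
        # Join the side with fewer placed neighbors: that cuts more edges.
        side[v] = 0 if count[0] <= count[1] else 1

    cut = sum(1 for u, v in edges if side[u] != side[v])
    return side, cut


if __name__ == "__main__":
    # 4-cycle: the optimal cut separates alternating vertices, cutting all 4 edges.
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    _, cut = greedy_max_cut(4, edges)
    print(cut)  # → 4
```

Such a greedy pass visits each edge a constant number of times, so its cost is $O(n + |E|)$ with a tiny constant, which is the relevant yardstick when judging a GNN heuristic that is also linear in $n$ but carries a large pre-factor.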