Graph neural networks (GNNs) are powerful machine learning models for various graph learning tasks. Recently, however, the expressive power of various GNN models has been shown to be limited: for example, GNNs cannot distinguish certain non-isomorphic graphs, nor can they learn efficient graph algorithms. In this paper, we demonstrate that GNNs become powerful simply by adding a random feature to each node. We prove that random features enable GNNs to learn nearly optimal (in terms of approximation ratio) polynomial-time approximation algorithms for the minimum dominating set problem and the maximum matching problem. The main advantage of our method is that it can be combined with off-the-shelf GNN models with only slight modifications. Through experiments, we show that adding random features enables GNNs to solve various problems that standard GNNs, including graph convolutional networks (GCNs) and graph isomorphism networks (GINs), cannot solve.
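The modification described above is a small preprocessing step. A minimal sketch, assuming node features are stored as a NumPy matrix (the function name and the uniform distribution are illustrative choices; a fresh draw would be made for each forward pass):

```python
import numpy as np

def add_random_feature(x, rng=None):
    """Append one random scalar feature to each node.

    x: (num_nodes, num_features) node feature matrix.
    Returns a (num_nodes, num_features + 1) matrix, where the extra
    column is drawn uniformly at random, independently per node.
    """
    rng = np.random.default_rng() if rng is None else rng
    r = rng.random((x.shape[0], 1))  # one fresh random value per node
    return np.concatenate([x, r], axis=1)

# Toy features for a 4-node graph; the augmented matrix gains one column.
x = np.ones((4, 3))
x_rf = add_random_feature(x)
print(x_rf.shape)  # (4, 4)
```

The augmented matrix is then fed to the GNN unchanged, which is why the method composes with off-the-shelf models.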