In this paper, we study, from a theoretical perspective, how powerful graph neural networks (GNNs) can be for learning approximation algorithms for combinatorial problems. To this end, we first establish a new class of GNNs that can solve a strictly wider variety of problems than existing GNNs. Then, we bridge the gap between GNN theory and the theory of distributed local algorithms to theoretically demonstrate that the most powerful GNN can learn approximation algorithms for the minimum dominating set problem and the minimum vertex cover problem with certain approximation ratios, and that no GNN can achieve better ratios than these. This paper is the first to elucidate the approximation ratios of GNNs for combinatorial problems. Furthermore, we prove that adding a coloring or weak coloring to each node feature improves these approximation ratios. This indicates that preprocessing and feature engineering theoretically strengthen model capabilities.