One of the key problems in GNNs is how to characterize the importance of neighbor nodes in the aggregation process when learning node representations. A class of GNNs addresses this problem by learning implicit weights that represent the importance of neighbor nodes; we call these implicit GNNs, with the Graph Attention Network as a representative example. The basic idea of implicit GNNs is to introduce graph information with special properties, followed by Learnable Transformation Structures (LTS) that encode the importance of neighbor nodes in a data-driven way. In this paper, we argue that LTS causes the special properties of graph information to vanish during the learning process, rendering the graph information unhelpful for learning node representations. We call this phenomenon Graph Information Vanishing (GIV). We also find that LTS maps different graph information to highly similar results. To validate these two points, we design two sets of 70 randomized experiments on five implicit GNN methods and seven benchmark datasets, using a random permutation operator to disrupt the order of graph information and replacing graph information with random values. We find that randomization does not affect model performance in 93\% of cases, and the remaining roughly 7\% of cases incur an average accuracy loss of only 0.5\%. Moreover, in 81\% of cases, the cosine similarity between the outputs that LTS produces from different graph information exceeds 99\%. These experimental results support the existence of GIV in implicit GNNs and imply that existing implicit GNN methods do not make good use of graph information. The relationship between graph information and LTS should be rethought to ensure that graph information actually contributes to node representations.
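To make the randomization test concrete, the following is a minimal sketch (not the authors' code), assuming the graph information is stored as a tensor and the LTS is an arbitrary \texttt{torch.nn.Module}; the function names \texttt{randomize\_graph\_information} and \texttt{lts\_output\_similarity} are hypothetical illustrations of the two probes described above.

\begin{verbatim}
import torch

def randomize_graph_information(graph_info: torch.Tensor,
                                mode: str = "permute") -> torch.Tensor:
    """Return a randomized copy of the graph information.

    mode="permute": randomly shuffles the order of the entries,
    destroying the special structural properties while keeping values.
    mode="random": replaces the information with random values.
    (Hypothetical helper illustrating the abstract's two probes.)
    """
    if mode == "permute":
        idx = torch.randperm(graph_info.size(0))
        return graph_info[idx]
    if mode == "random":
        return torch.rand_like(graph_info)
    raise ValueError(f"unknown mode: {mode}")

def lts_output_similarity(lts: torch.nn.Module,
                          info_a: torch.Tensor,
                          info_b: torch.Tensor) -> float:
    """Cosine similarity between LTS outputs for two graph inputs."""
    out_a = lts(info_a).flatten()
    out_b = lts(info_b).flatten()
    return torch.nn.functional.cosine_similarity(
        out_a, out_b, dim=0).item()

# Example: probe a toy attention-style LTS.
lts = torch.nn.Sequential(torch.nn.Linear(8, 8),
                          torch.nn.Softmax(dim=-1))
info = torch.rand(100, 8)
sim = lts_output_similarity(lts, info,
                            randomize_graph_information(info))
print(f"cosine similarity under permutation: {sim:.4f}")
\end{verbatim}

Under GIV, one would expect the similarity reported by such a probe to remain close to 1 even when the graph information is permuted or replaced.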