Graph Neural Networks (GNNs) are popular models for graph learning problems and show strong empirical performance in many practical tasks. However, their theoretical properties have not been fully elucidated. In this paper, we investigate whether GNNs can exploit the graph structure, from the perspective of their expressive power. In our analysis, we consider graph generation processes controlled by hidden node features that contain all information about the graph structure. A typical example of this framework is a kNN graph constructed from the hidden features. In our main results, we show that GNNs can recover the hidden node features from the input graph alone, even when all node features, including the hidden features themselves and any indirect hints, are unavailable. GNNs can then use the recovered node features for downstream tasks. These results show that GNNs can fully exploit the graph structure by themselves, and in effect, GNNs can use both the hidden and explicit node features for downstream tasks. In the experiments, we confirm the validity of our results by showing that GNNs can accurately recover the hidden features using a GNN architecture built on our theoretical analysis.
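To make the setting concrete, here is a minimal sketch of the kind of graph generation process described above: a kNN graph built from hidden node features, after which the features are discarded so that only the adjacency structure is observable. The function name `knn_graph` and the Euclidean metric are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def knn_graph(hidden_features, k):
    """Build a directed kNN adjacency matrix from hidden node features.

    Each node is linked to its k nearest neighbors under Euclidean
    distance (an assumed metric for illustration). In the setting of
    the paper, a model would observe only the returned adjacency
    matrix, not `hidden_features` itself.
    """
    n = hidden_features.shape[0]
    # Pairwise squared Euclidean distances between all nodes.
    diff = hidden_features[:, None, :] - hidden_features[None, :, :]
    dist = (diff ** 2).sum(axis=-1)
    np.fill_diagonal(dist, np.inf)  # exclude self-loops
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        neighbors = np.argsort(dist[i])[:k]  # k closest nodes to node i
        adj[i, neighbors] = 1
    return adj

# Toy example: 5 nodes with 2-dimensional hidden features.
rng = np.random.default_rng(0)
z = rng.normal(size=(5, 2))
A = knn_graph(z, k=2)
```

The recovery question studied in the paper is then: given only `A` (and no node features), can a GNN reconstruct `z` up to the ambiguities inherent in the construction?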