Graph Neural Networks (GNNs) have emerged as a flexible and powerful approach for learning over graphs. Despite this success, existing GNNs are constrained by their local message-passing architecture and are provably limited in their expressive power. In this work, we propose a new GNN architecture -- the Neural Tree. The neural tree architecture does not perform message passing on the input graph, but on a tree-structured graph, called the H-tree, that is constructed from the input graph. Nodes in the H-tree correspond to subgraphs in the input graph, and they are organized hierarchically so that a parent node in the H-tree always corresponds to a larger subgraph of the input graph than its children. We show that the neural tree architecture can approximate any smooth probability distribution function over an undirected graph, as well as emulate the junction tree algorithm. We also prove that the number of parameters needed to achieve an $\epsilon$-approximation of the distribution function is exponential in the treewidth of the input graph, but linear in its size. We apply the neural tree to semi-supervised node classification in 3D scene graphs, and show that these theoretical properties translate into significant gains in prediction accuracy over more traditional GNN architectures.
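To make the hierarchical construction described above concrete, the following is a minimal sketch, not the authors' exact algorithm, of how an H-tree-like structure can be assembled from a tree decomposition of the input graph. The helper name `build_h_tree` and the use of NetworkX's approximate tree-decomposition routine are illustrative assumptions; the key property preserved is that every parent node corresponds to a larger subgraph (a decomposition bag) than its leaf children (individual graph nodes).

```python
# Illustrative sketch only: builds a tree-structured graph whose internal nodes
# are subgraphs (bags of a tree decomposition) and whose leaves are copies of
# the original graph nodes. This approximates the H-tree idea from the
# abstract; it is NOT the authors' exact construction.
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_fill_in


def build_h_tree(graph: nx.Graph) -> nx.Graph:
    # Bags of the (approximate) tree decomposition play the role of
    # subgraph nodes; adjacent bags are connected as in the decomposition tree.
    _width, decomposition = treewidth_min_fill_in(graph)
    h_tree = nx.Graph()

    for bag in decomposition.nodes:            # each bag is a frozenset of nodes
        h_tree.add_node(("bag", bag))
    for bag_u, bag_v in decomposition.edges:
        h_tree.add_edge(("bag", bag_u), ("bag", bag_v))

    # Attach a leaf for every (bag, original node) pair, so each parent in the
    # H-tree corresponds to a strictly larger subgraph than its children.
    for bag in decomposition.nodes:
        for v in bag:
            leaf = ("leaf", bag, v)
            h_tree.add_node(leaf)
            h_tree.add_edge(("bag", bag), leaf)

    return h_tree


if __name__ == "__main__":
    g = nx.cycle_graph(5)                      # toy input graph
    h = build_h_tree(g)
    print(nx.is_tree(h), h.number_of_nodes())  # the result is tree-structured
```

Message passing would then run on `h_tree` rather than on the original graph, which is what allows the architecture to emulate junction-tree-style inference, with parameter count growing exponentially in the treewidth (the size of the largest bag) but only linearly in the graph size.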