Graph neural networks (GNNs) have recently shown strong performance in various fields. In this paper, we propose graph tree neural networks (GTNNs), designed to solve the problems of existing networks by analyzing the structure of human neural networks. In GTNNs, information units are related in the form of a graph; these units then form larger units of information, which in turn relate to other information units. Here, a unit of information is a set of neurons, and a GTNN expresses it as a vector. Defining the starting and ending points in a single graph is difficult, and a tree cannot express the relationships among sibling nodes. A graph tree, however, can use leaf and root nodes as its starting and ending points while still expressing the relationships among sibling nodes. Depth-first convolution (DFC) encodes the interaction results from the leaf nodes to the root node in a bottom-up manner, and depth-first deconvolution (DFD) decodes the interaction results from the root node to the leaf nodes in a top-down manner. GTNN learning is data-driven, in that the number of convolutions varies with the depth of the tree. Moreover, features of different types can be learned together. This paper introduces supervised, unsupervised, and semi-supervised learning using graph tree recursive neural networks (GTRs), graph tree recursive attention networks (GTRAs), and graph tree recursive autoencoders (GTRAEs). We experimented with a simple toy test on a source code dataset.
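The leaf-to-root (DFC) and root-to-leaf (DFD) recursions described above can be sketched minimally as follows. This is only an illustrative assumption, not the paper's actual model: the learned convolution is replaced here by a hypothetical mean-based combination, and the `Node`, `dfc`, and `dfd` names are invented for the example.

```python
# Sketch of the DFC/DFD traversal pattern on a tree of information units.
# Each node holds a vector (its information unit); DFC aggregates child
# vectors bottom-up into the root, and DFD pushes the parent representation
# top-down to the leaves. The mean-based combination below is a stand-in
# for the learned operations in the paper.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    vec: List[float]                              # information-unit vector
    children: List["Node"] = field(default_factory=list)

def dfc(node: Node) -> List[float]:
    """Depth-first convolution: encode leaf-to-root, bottom-up."""
    if not node.children:
        return node.vec
    child_vecs = [dfc(c) for c in node.children]
    # mean-pool the encoded children, then combine with this node's vector
    mean = [sum(vals) / len(child_vecs) for vals in zip(*child_vecs)]
    return [(a + b) / 2 for a, b in zip(node.vec, mean)]

def dfd(node: Node, parent_vec: List[float]) -> None:
    """Depth-first deconvolution: decode root-to-leaf, top-down."""
    node.vec = [(a + b) / 2 for a, b in zip(node.vec, parent_vec)]
    for c in node.children:
        dfd(c, node.vec)
```

Because both traversals recurse over the children, the number of combination steps automatically follows the depth of the tree, matching the data-driven property described in the abstract.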