Graphs are an essential part of many machine learning problems, such as the analysis of parse trees, social networks, knowledge graphs, transportation systems, and molecular structures. Applying machine learning in these areas typically involves learning the graph structure and the relationships between the nodes of the graph. However, learning the graph structure is often complex, particularly when the graph is cyclic and the transitions from one node to another are conditional, as in graphs used to represent a finite state machine. To solve this problem, we propose to extend the memory-based Neural Turing Machine (NTM) with two novel additions. We allow transitions between nodes to be influenced by information received from external environments, and we let the NTM learn the context of those transitions. We refer to this extension as the Conditional Neural Turing Machine (CNTM). We show that the CNTM can infer conditional transition graphs by empirically verifying the model on two data sets: a large set of randomly generated graphs, and a graph modeling the information retrieval process during certain crisis situations. The results show that the CNTM is able to reproduce the paths inside the graph with accuracy ranging from 82.12% for 10-node graphs to 65.25% for 100-node graphs.
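To make the notion of a conditional transition graph concrete, the following is a minimal illustrative sketch (not code from the paper; node and signal names are invented for illustration). Each edge is taken only when a particular input signal from the environment is observed, exactly as in a finite state machine:

```python
# Illustrative conditional transition graph: an edge (node, signal) -> node
# is followed only when the matching input signal arrives from outside.
# The CNTM's task, by analogy, is to learn such a structure from traversals.
transitions = {
    ("start", "login_ok"): "dashboard",
    ("start", "login_fail"): "error",
    ("dashboard", "logout"): "start",
}

def walk(start, signals):
    """Follow the conditional edges from `start` for each observed signal,
    returning the full path of visited nodes."""
    node, path = start, [start]
    for signal in signals:
        node = transitions[(node, signal)]
        path.append(node)
    return path

print(walk("start", ["login_ok", "logout"]))
```

Reproducing a path here means emitting the correct node sequence given the sequence of conditioning signals, which is the behavior the accuracy figures above measure.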