In this paper, we propose an explanation of the representations learned by self-attention network (SAN) based neural sequence encoders, which regards the information captured by the model as graph structures and the encoding process of the model as the generation of these graph structures. The proposed explanation applies to existing work on SAN-based models, accounts for the relationship among the ability to capture structural or linguistic information, model depth, and sentence length, and can also be extended to other models such as recurrent neural network based models. Based on our explanation, we also propose a revisited multigraph called Multi-order-Graph (MoG), which models the graph structures captured by a SAN-based model as subgraphs of the MoG and casts the encoding of the SAN-based model as the generation of the MoG. Building on this explanation, we further introduce a Graph-Transformer that enhances the ability to capture multiple subgraphs of different orders and focuses on subgraphs of high orders. Experimental results on multiple neural machine translation tasks show that the Graph-Transformer yields effective performance improvements.
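The notion of subgraph "order" used above can be read, roughly, as the length of the relation paths that successive layers compose between tokens. The following is a minimal, hypothetical NumPy sketch for illustration only (not the authors' implementation): the function name `higher_order_relations` and the use of row-normalized attention weights as first-order relations are assumptions, and they merely show how order-n relations between tokens could be obtained by composing a first-order relation matrix with itself.

```python
import numpy as np

def higher_order_relations(attn, max_order=3):
    """Return relation matrices of order 1..max_order.

    `attn` is a (seq_len, seq_len) matrix of first-order relation
    strengths between tokens (e.g. row-normalized attention weights).
    The order-n matrix aggregates relation paths of length n, which is
    one simple way to picture subgraphs of increasing order.
    """
    orders = [attn]
    for _ in range(max_order - 1):
        # Compose the current highest-order relations with the
        # first-order relations to obtain the next order.
        orders.append(orders[-1] @ attn)
    return orders

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    attn = rng.random((5, 5))
    attn = attn / attn.sum(-1, keepdims=True)  # row-normalize to stochastic
    for k, rel in enumerate(higher_order_relations(attn), start=1):
        print(f"order-{k} relation matrix, row sums:", rel.sum(-1).round(3))
```

Since the first-order matrix is row-stochastic, each composed order remains row-stochastic, so the relation mass is redistributed over longer paths rather than amplified; this is only a toy picture of the order hierarchy, not the MoG construction itself.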