Text generation from Abstract Meaning Representation (AMR) has benefited substantially from the rise of Pretrained Language Models (PLMs). Myriad approaches linearize the input graph into a sequence of tokens to fit PLM tokenization requirements. This transformation, however, jeopardizes the structural integrity of the graph and is therefore detrimental to the resulting representation. To overcome this issue, Ribeiro et al. recently proposed StructAdapt, a structure-aware adapter that injects the input graph's connectivity into PLMs using Graph Neural Networks (GNNs). In this paper, we investigate the influence of Relative Position Embeddings (RPE) on AMR-to-Text generation and, in parallel, examine the robustness of StructAdapt. Through ablation studies, graph attacks, and link prediction, we find that RPE might be partially encoding input graphs. We suggest that further research on the role of RPE will provide valuable insights for Graph-to-Text generation.
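To make the linearization step concrete, here is a minimal sketch of depth-first AMR linearization, assuming a toy nested-tuple encoding of the AMR for "The boy wants to go"; the function and data names are illustrative, not from the paper. Note how the reentrant variable collapses to a bare token, the kind of structural loss the abstract describes.

```python
def linearize(node):
    """Depth-first linearization of an AMR node into a flat token sequence.

    A node is (variable, concept, [(relation, child), ...]); a reentrant
    reference is just the variable string, which flattens the reentrancy.
    """
    if isinstance(node, str):          # reentrant variable reference
        return [node]
    var, concept, edges = node
    tokens = ["(", concept]
    for relation, child in edges:
        tokens += [relation] + linearize(child)
    tokens.append(")")
    return tokens

# AMR for "The boy wants to go"; the second :ARG0 reuses variable "b".
amr = ("w", "want-01", [
    (":ARG0", ("b", "boy", [])),
    (":ARG1", ("g", "go-02", [(":ARG0", "b")])),
])

print(" ".join(linearize(amr)))
# ( want-01 :ARG0 ( boy ) :ARG1 ( go-02 :ARG0 b ) )
```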
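The following sketch illustrates, on toy data only, the kind of relationship the RPE hypothesis rests on: relative offsets in the linearized token order partially track shortest-path distances in the AMR graph, but diverge where the tree-shaped linearization and the actual graph disagree. This is not the paper's probing methodology; the graph, the BFS helper, and all names are assumptions for illustration.

```python
from collections import deque
from itertools import combinations

def shortest_path(adj, src, dst):
    """BFS shortest-path distance between two concepts in an undirected graph."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        if node == dst:
            return d
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None

# Toy AMR for "The boy wants the girl to believe him": the reentrant "boy"
# makes some graph distances diverge from offsets in the linearized order.
adj = {
    "want-01":    ["boy", "believe-01"],
    "boy":        ["want-01", "believe-01"],
    "believe-01": ["want-01", "girl", "boy"],
    "girl":       ["believe-01"],
}
order = {"want-01": 0, "boy": 1, "believe-01": 2, "girl": 3}

for a, b in combinations(order, 2):
    print(f"{a} -- {b}: sequence offset {abs(order[a] - order[b])}, "
          f"graph distance {shortest_path(adj, a, b)}")
```

Running this shows pairs such as want-01 and girl where the sequence offset (3) overstates the graph distance (2), which is why sequence-level RPE can at best partially encode the input graph.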