Previous work on text generation from graph-structured data relies on pretrained language models (PLMs), using graph linearization heuristics rather than explicitly modeling the graph structure. Efficiently encoding the graph structure into PLMs is challenging because these models were pretrained on natural language text, and modeling structured data can lead to catastrophic forgetting of distributional knowledge. In this paper, we propose StructAdapt, an adapter method for encoding graph structure into PLMs. In contrast to prior work, StructAdapt effectively models interactions among the nodes based on the graph connectivity, training only graph structure-aware adapter parameters. In this way, we avoid catastrophic forgetting while preserving the topological structure of the graph. We empirically demonstrate the benefits of explicitly encoding graph structure into PLMs using adapters, achieving state-of-the-art results on two AMR-to-text datasets while training only 5.1% of the PLM parameters.
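To make the idea concrete, below is a minimal PyTorch sketch of a graph-structure-aware adapter layer. It is an illustration under stated assumptions, not the paper's implementation: a single plain GCN step stands in for the adapter's graph convolution, and the names and dimensions (StructAdaptLayer, adapter_size, adj) are hypothetical. The property it illustrates is the one claimed above: node representations interact only along graph edges, and only the small adapter is trained while the PLM weights stay frozen.

```python
import torch
import torch.nn as nn

class StructAdaptLayer(nn.Module):
    """Sketch of a structural adapter: a bottleneck whose down-projection
    follows a graph-convolution step, so node states mix only along graph
    edges. Inserted after a frozen PLM layer; only these parameters train."""

    def __init__(self, hidden_size: int, adapter_size: int):
        super().__init__()
        self.layer_norm = nn.LayerNorm(hidden_size)
        # Down-projection applied after neighbor aggregation (one GCN step).
        self.down = nn.Linear(hidden_size, adapter_size)
        self.up = nn.Linear(adapter_size, hidden_size)
        self.act = nn.ReLU()

    def forward(self, hidden_states: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # hidden_states: (num_nodes, hidden_size) states of the graph's node tokens
        # adj: (num_nodes, num_nodes) row-normalized adjacency with self-loops
        x = self.layer_norm(hidden_states)
        x = self.act(self.down(adj @ x))    # aggregate neighbors, project down
        return hidden_states + self.up(x)   # residual keeps frozen PLM states intact

# Toy usage: a 5-node chain graph with 768-dim PLM states.
num_nodes, hidden = 5, 768
h = torch.randn(num_nodes, hidden)
adj = torch.eye(num_nodes)
for src, tgt in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    adj[src, tgt] = adj[tgt, src] = 1.0
adj = adj / adj.sum(dim=-1, keepdim=True)   # row-normalize
out = StructAdaptLayer(hidden, adapter_size=32)(h, adj)   # -> (5, 768)
```

A full implementation would additionally account for the AMR edge labels in the convolution and insert such an adapter into every layer of the frozen PLM; the sketch omits both for brevity.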