Neural models that do not rely on pre-training have excelled at keyphrase generation when large annotated datasets are available. Meanwhile, newer approaches have incorporated pre-trained language models (PLMs) for their data efficiency. However, a systematic study of how the two types of approaches compare, and of how different design choices affect the performance of PLM-based models, is still missing. To fill this knowledge gap and facilitate a more informed use of PLMs for keyphrase extraction and keyphrase generation, we present an in-depth empirical study. Formulating keyphrase extraction as sequence labeling and keyphrase generation as sequence-to-sequence generation, we perform extensive experiments in three domains. After showing that PLMs achieve competitive high-resource performance and state-of-the-art low-resource performance, we investigate important design choices, including in-domain PLMs, PLMs with different pre-training objectives, using PLMs under a fixed parameter budget, and different formulations for present keyphrases. Further results show that (1) in-domain BERT-like PLMs can be used to build strong and data-efficient keyphrase generation models; (2) under a fixed parameter budget, prioritizing model depth over width and allocating more layers to the encoder yields better encoder-decoder models; and (3) the four in-domain PLMs we introduce achieve competitive performance in the news domain and state-of-the-art performance in the scientific domain.
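The two formulations named above can be illustrated concretely. The following is a minimal sketch using Hugging Face Transformers; the checkpoint names ("bert-base-uncased", "facebook/bart-base") and the three-way BIO label scheme are illustrative assumptions for exposition, not the exact configuration studied in the paper.

```python
# Sketch of the two formulations: extraction as token-level sequence labeling,
# generation as sequence-to-sequence decoding. Checkpoints are placeholders.
from transformers import (
    AutoTokenizer,
    AutoModelForTokenClassification,  # keyphrase extraction as sequence labeling
    AutoModelForSeq2SeqLM,            # keyphrase generation as seq2seq generation
)

doc = "Pre-trained language models improve keyphrase generation."

# --- Keyphrase extraction: tag each token with O / B-KP / I-KP labels ---
ext_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ext_model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=3  # assumed label set: O, B-KP, I-KP
)
ext_inputs = ext_tokenizer(doc, return_tensors="pt")
token_logits = ext_model(**ext_inputs).logits  # (1, seq_len, 3) per-token label scores

# --- Keyphrase generation: decode a keyphrase sequence from the document ---
gen_tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
gen_model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
gen_inputs = gen_tokenizer(doc, return_tensors="pt")
generated_ids = gen_model.generate(**gen_inputs, max_new_tokens=32)
print(gen_tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```

In the labeling formulation only phrases present in the document can be recovered, whereas the seq2seq formulation can also produce absent keyphrases, which is why the two are treated as distinct tasks in the experiments.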