Due to the success of pre-trained language models (PLMs), existing PLM-based summarization models show powerful generative capability. However, these models are trained on general-purpose summarization datasets, so the generated summaries fail to satisfy the needs of different readers. To generate topic-oriented summaries, many efforts have been devoted to topic-focused summarization. However, these works generate a summary guided only by a prompt comprising topic words. Despite their success, these methods still ignore the disturbance from sentences with non-relevant topics and conduct cross-interaction between tokens only through the attention module. To address these issues, we propose a topic-arc recognition objective and a topic-selective graph network. First, the topic-arc recognition objective is used during model training, endowing the model with the capability to discriminate topics. Moreover, the topic-selective graph network conducts topic-guided cross-interaction on sentences based on the results of topic-arc recognition. In our experiments, we conduct extensive evaluations on the NEWTS and COVIDET datasets. The results show that our methods achieve state-of-the-art performance.