Automatic text summarization aims to condense a document into a shorter version while preserving its key information. Unlike extractive summarization, which simply selects text fragments from the document, abstractive summarization generates the summary word by word. Most current state-of-the-art (SOTA) abstractive summarization methods build on the Transformer encoder-decoder architecture and focus on novel self-supervised objectives for pre-training. While these models capture the contextual information among words in documents well, little attention has been paid to incorporating global semantics when fine-tuning for the downstream abstractive summarization task. In this study, we propose a topic-aware abstractive summarization (TAAS) framework that leverages the underlying semantic structure of documents as represented by their latent topics. Specifically, TAAS seamlessly incorporates neural topic modeling into an encoder-decoder-based sequence generation procedure via attention. This design learns and preserves the global semantics of documents, making summarization more effective, as demonstrated by our experiments on real-world datasets. Compared with several cutting-edge baselines, TAAS outperforms BART, a well-recognized SOTA model, by 2%, 8%, and 12% on the F1 measure of ROUGE-1, ROUGE-2, and ROUGE-L, respectively. TAAS also achieves performance comparable to PEGASUS and ProphetNet, which is notable given that training those models requires computing capacity far beyond what we used in this study.
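To make the topic-attention fusion concrete, the PyTorch sketch below illustrates one plausible way a document-level topic mixture from a neural topic model could be injected into encoder hidden states via attention before decoding. The module names, dimensions, and residual fusion scheme are illustrative assumptions for exposition, not the exact TAAS implementation.

```python
# A minimal sketch of topic-aware attention fusion, assuming a VAE-style
# neural topic encoder and a Transformer encoder-decoder summarizer.
# All names and dimensions below are hypothetical, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralTopicEncoder(nn.Module):
    """Maps a bag-of-words document vector to a latent topic mixture."""
    def __init__(self, vocab_size: int, num_topics: int, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(vocab_size, hidden), nn.ReLU())
        self.to_logits = nn.Linear(hidden, num_topics)

    def forward(self, bow: torch.Tensor) -> torch.Tensor:
        # bow: (batch, vocab_size) term-frequency vectors
        # Returns theta: (batch, num_topics), a topic mixture per document.
        return F.softmax(self.to_logits(self.mlp(bow)), dim=-1)

class TopicAwareAttention(nn.Module):
    """Fuses a document-level topic vector into token-level encoder states
    via attention, so the decoder receives topic-informed representations."""
    def __init__(self, d_model: int, num_topics: int):
        super().__init__()
        self.topic_proj = nn.Linear(num_topics, d_model)
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)

    def forward(self, enc_states: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
        # enc_states: (batch, seq_len, d_model); theta: (batch, num_topics)
        topic_query = self.topic_proj(theta).unsqueeze(1)      # (batch, 1, d_model)
        topic_ctx, _ = self.attn(topic_query, enc_states, enc_states)
        # Broadcast the topic-attended context onto every token state
        # (residual fusion preserves the original contextual information).
        return enc_states + topic_ctx

# Illustrative usage:
#   theta = NeuralTopicEncoder(vocab_size=30522, num_topics=50)(bow)
#   fused = TopicAwareAttention(d_model=768, num_topics=50)(encoder_states, theta)
#   ...then feed `fused` to a BART-style decoder for summary generation.
```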