Query-focused summarization (QFS) aims to produce summaries that answer particular questions of interest, enabling greater user control and personalization. While recently released datasets, such as QMSum or AQuaMuSe, facilitate research efforts in QFS, the field lacks a comprehensive study of the broad space of applicable modeling methods. In this paper we conduct a systematic exploration of neural approaches to QFS, considering two general classes of methods: two-stage extractive-abstractive solutions and end-to-end models. Within those categories, we investigate existing methods and present two model extensions that achieve state-of-the-art performance on the QMSum dataset by a margin of up to 3.38 ROUGE-1, 3.72 ROUGE-2, and 3.28 ROUGE-L. Through quantitative experiments we highlight the trade-offs between different model configurations and explore the transfer abilities between summarization tasks. Code and checkpoints are made publicly available: https://github.com/salesforce/query-focused-sum.