Query-focused summarization (QFS) aims to produce summaries that answer particular questions of interest, enabling greater user control and personalization. While recently released datasets, such as QMSum or AQuaMuSe, facilitate research efforts in QFS, the field lacks a comprehensive study of the broad space of applicable modeling methods. In this paper we conduct a systematic exploration of neural approaches to QFS, considering two general classes of methods: two-stage extractive-abstractive solutions and end-to-end models. Within those categories, we investigate existing models and explore strategies for transfer learning. We also present two modeling extensions that achieve state-of-the-art performance on the QMSum dataset, up to a margin of 3.38 ROUGE-1, 3.72 ROUGE-2, and 3.28 ROUGE-L when combined with transfer learning strategies. Results from human evaluation suggest that the best models produce more comprehensive and factually consistent summaries compared to a baseline model. Code and checkpoints are made publicly available: https://github.com/salesforce/query-focused-sum.