The availability of large-scale datasets has driven the development of neural models that create generic summaries from single or multiple documents. In this work we consider query focused summarization (QFS), a task for which training data in the form of queries, documents, and summaries is not readily available. We propose to decompose QFS into (1) query modeling (i.e., finding supportive evidence within a set of documents for a query) and (2) conditional language modeling (i.e., summary generation). We introduce MaRGE, a Masked ROUGE Regression framework for evidence estimation and ranking which relies on a unified representation for summaries and queries, so that summaries in generic data can be converted into proxy queries for learning a query model. Experiments across QFS benchmarks and query types show that our model achieves state-of-the-art performance despite learning from weak supervision.
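To make the proxy-query idea concrete, below is a minimal sketch (not code from the paper) of how MaRGE-style training instances could be derived from generic summarization data: the reference summary is partially masked so it serves as a proxy query, and each document sentence receives a ROUGE-based relevance score as its regression target. The helper names (`mask_summary`, `build_instances`), the 0.5 mask rate, and the simplified ROUGE-1 recall are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of MaRGE-style training-instance construction (assumptions noted above).
import random
import re
from collections import Counter


def rouge1_recall(candidate: str, reference: str) -> float:
    """Clipped unigram overlap divided by reference length (ROUGE-1 recall)."""
    cand = Counter(re.findall(r"\w+", candidate.lower()))
    ref = Counter(re.findall(r"\w+", reference.lower()))
    if not ref:
        return 0.0
    overlap = sum(min(count, cand[tok]) for tok, count in ref.items())
    return overlap / sum(ref.values())


def mask_summary(summary: str, mask_rate: float = 0.5) -> str:
    """Randomly mask summary tokens so the summary acts as a proxy query."""
    return " ".join(
        "[MASK]" if random.random() < mask_rate else tok
        for tok in summary.split()
    )


def build_instances(doc_sentences, summary):
    """Yield (proxy_query, sentence, relevance) triples for ROUGE regression."""
    for sent in doc_sentences:
        yield mask_summary(summary), sent, rouge1_recall(sent, summary)


if __name__ == "__main__":
    sentences = [
        "The committee approved the new funding plan on Tuesday.",
        "Local weather remained mild throughout the week.",
    ]
    summary = "The committee approved a new funding plan."
    for query, sent, score in build_instances(sentences, summary):
        print(f"{score:.2f}  query={query!r}  sentence={sent!r}")
```

At inference time, a regressor trained on such triples would score document sentences against a real query in place of the masked summary; this is the sense in which a unified representation for summaries and queries lets generic summarization data weakly supervise the query model.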