The limited size of existing query-focused summarization datasets makes training data-driven summarization models challenging. Meanwhile, manually constructing a query-focused summarization corpus is costly and time-consuming. In this paper, we use Wikipedia to automatically collect a large query-focused summarization dataset (named WIKIREF) of more than 280,000 examples, which can serve as a means of data augmentation. We also develop a BERT-based query-focused summarization model (Q-BERT) that extracts sentences from the documents as summaries. To better adapt a large model with millions of parameters to small benchmark datasets, we identify and fine-tune only a sparse subnetwork, which corresponds to a small fraction of the model's parameters. Experimental results on three DUC benchmarks show that the model pre-trained on WIKIREF already achieves reasonable performance. After fine-tuning on the specific benchmark datasets, the model with data augmentation outperforms strong comparison systems. Moreover, both our proposed Q-BERT model and subnetwork fine-tuning further improve performance. The dataset is publicly available at https://aka.ms/wikiref.
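To make the sparse-subnetwork idea concrete, the following is a minimal sketch of how such fine-tuning could be set up in PyTorch with a pretrained BERT encoder. The mask-selection rule (keeping the top 5% of weights by magnitude) and all variable names are illustrative assumptions for exposition, not the exact procedure used in the paper.

```python
# Hedged sketch: fine-tune only a sparse subnetwork of BERT by masking
# gradients outside the selected weights. The keep ratio and magnitude-based
# selection are assumptions, not the paper's exact recipe.
import torch
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")

# Build a binary mask per parameter tensor, keeping only a small fraction
# of weights (here, the largest-magnitude 5%) trainable.
keep_ratio = 0.05
masks = {}
for name, param in model.named_parameters():
    k = max(1, int(keep_ratio * param.numel()))
    flat = param.detach().abs().flatten()
    threshold = flat.kthvalue(param.numel() - k + 1).values
    masks[name] = (param.detach().abs() >= threshold).float()

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def masked_step(loss):
    """Backpropagate, then zero out gradients outside the subnetwork."""
    optimizer.zero_grad()
    loss.backward()
    for name, param in model.named_parameters():
        if param.grad is not None:
            param.grad.mul_(masks[name])
    optimizer.step()
```

Only the parameters inside the mask receive gradient updates, so the effective number of trainable weights stays a small fraction of the full model, which is the intuition behind adapting a large pretrained model to small benchmarks.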