Recently proposed pre-trained generation models achieve strong performance on single-document summarization benchmarks. However, most of them are pre-trained with general-purpose objectives and are mainly designed to process single-document inputs. In this paper, we propose PRIMER, a pre-trained model for multi-document representation with a focus on summarization that reduces the need for dataset-specific architectures and large amounts of labeled fine-tuning data. Specifically, we adopt the Longformer architecture with a proper input transformation and global attention to fit multi-document inputs, and we use the Gap Sentence Generation objective with a new strategy, called Entity Pyramid, for selecting sentences salient to the whole cluster, which teaches the model to select and aggregate information across a cluster of related documents. With extensive experiments on 6 multi-document summarization datasets from 3 different domains in zero-shot, few-shot, and fully supervised settings, PRIMER outperforms current state-of-the-art models on most of these settings by large margins. Code and pre-trained models are released at https://github.com/allenai/PRIMER
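To make the input transformation and global-attention idea concrete, below is a minimal sketch (not the authors' released code) of how a cluster of documents might be concatenated with a separator token and fed to a Longformer-style encoder-decoder, with global attention placed on the start token and the separator tokens. The checkpoint name `allenai/led-base-16384` is a stand-in for a PRIMER-style model, and the `<doc-sep>` token is an illustrative assumption.

```python
# Sketch: multi-document input assembly for a Longformer encoder-decoder (LED).
# Assumptions: the checkpoint is a stand-in, and "<doc-sep>" is a hypothetical
# cross-document separator token added to the vocabulary for illustration.
import torch
from transformers import AutoTokenizer, LEDForConditionalGeneration

MODEL_NAME = "allenai/led-base-16384"  # stand-in for a PRIMER-style checkpoint
DOC_SEP = "<doc-sep>"                  # assumed cross-document separator token

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.add_special_tokens({"additional_special_tokens": [DOC_SEP]})
model = LEDForConditionalGeneration.from_pretrained(MODEL_NAME)
model.resize_token_embeddings(len(tokenizer))

def summarize_cluster(documents, max_input_len=4096, max_output_len=256):
    # Concatenate the related documents into one sequence separated by DOC_SEP.
    joined = f" {DOC_SEP} ".join(documents)
    inputs = tokenizer(joined, return_tensors="pt",
                       truncation=True, max_length=max_input_len)

    # Global attention on the first token and every separator token, so these
    # positions attend to, and are attended by, the entire concatenated input.
    global_attention_mask = torch.zeros_like(inputs["input_ids"])
    global_attention_mask[:, 0] = 1
    sep_id = tokenizer.convert_tokens_to_ids(DOC_SEP)
    global_attention_mask[inputs["input_ids"] == sep_id] = 1

    summary_ids = model.generate(inputs["input_ids"],
                                 attention_mask=inputs["attention_mask"],
                                 global_attention_mask=global_attention_mask,
                                 max_length=max_output_len, num_beams=4)
    return tokenizer.decode(summary_ids[0], skip_special_tokens=True)

docs = ["First related news article ...", "Second related news article ..."]
print(summarize_cluster(docs))
```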