Scientific extreme summarization (TLDR) aims to form ultra-short summaries of scientific papers. Previous efforts on curating scientific TLDR datasets failed to scale up due to the heavy human annotation and domain expertise required. In this paper, we propose a simple yet effective approach to automatically extracting TLDR summaries for scientific papers from their citation texts. Based on the proposed approach, we create a new benchmark CiteSum without human annotation, which is around 30 times larger than the previous human-curated dataset SciTLDR. We conduct a comprehensive analysis of CiteSum, examining its data characteristics and establishing strong baselines. We further demonstrate the usefulness of CiteSum by adapting models pre-trained on CiteSum (named CITES) to new tasks and domains with limited supervision. For scientific extreme summarization, CITES outperforms most fully-supervised methods on SciTLDR without any fine-tuning and obtains state-of-the-art results with only 128 examples. For news extreme summarization, CITES achieves significant gains on XSum over its base model (not pre-trained on CiteSum), e.g., +7.2 ROUGE-1 zero-shot performance and state-of-the-art few-shot performance. For news headline generation, CITES performs the best among unsupervised and zero-shot methods on Gigaword.
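To make the core idea concrete, the sketch below illustrates (under our own assumptions, not the authors' released pipeline) how a citation sentence about a target paper can be cleaned and paired with that paper's text to form a (source, summary) example. The field and function names (e.g., citation_contexts, build_tldr_pair) are hypothetical placeholders.

```python
# Minimal illustrative sketch: treat a sentence that a later paper writes when citing
# a target paper as an ultra-short summary (TLDR) of that target paper.
# Names like `citation_contexts` and `abstract` are hypothetical, not the paper's API.

import re

def clean_citation_text(citation_sentence: str) -> str:
    """Strip inline citation markers so the sentence reads as a standalone summary."""
    # Remove bracketed markers like [12] and parenthetical citations like (Smith et al., 2020).
    text = re.sub(r"\[\d+(?:,\s*\d+)*\]", "", citation_sentence)
    text = re.sub(r"\([A-Z][A-Za-z-]+ et al\.,? \d{4}\)", "", text)
    return re.sub(r"\s+", " ", text).strip()

def build_tldr_pair(paper: dict, min_words: int = 10, max_words: int = 40):
    """Pair a paper's text (here, just its abstract) with the first cleaned citation
    sentence of acceptable length, yielding one (source, summary) training example."""
    for sentence in paper.get("citation_contexts", []):
        summary = clean_citation_text(sentence)
        if min_words <= len(summary.split()) <= max_words:
            return {"source": paper["abstract"], "summary": summary}
    return None

if __name__ == "__main__":
    example_paper = {
        "abstract": "We present a method for ... (full abstract of the cited paper)",
        "citation_contexts": [
            "Recently, [12] proposed a simple approach that extracts TLDR summaries "
            "of scientific papers directly from the sentences citing them."
        ],
    }
    print(build_tldr_pair(example_paper))
```

A real pipeline would additionally filter for citation sentences that actually summarize the cited work (rather than merely mention it), but the pairing step shown here captures the basic construction.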