We introduce SummScreen, a summarization dataset comprising pairs of TV series transcripts and human-written recaps. The dataset provides a challenging testbed for abstractive summarization for several reasons. Plot details are often expressed indirectly in character dialogues and may be scattered across the entirety of the transcript. These details must be found and integrated to form the succinct plot descriptions in the recaps. Also, TV scripts contain content that does not directly pertain to the central plot but rather serves to develop characters or provide comic relief. This information is rarely contained in recaps. Since characters are fundamental to TV series, we also propose two entity-centric evaluation metrics. Empirically, we characterize the dataset by evaluating several methods, including neural models and those based on nearest neighbors. An oracle extractive approach outperforms all benchmarked models according to automatic metrics, showing that the neural models are unable to fully exploit the input transcripts. Human evaluation and qualitative analysis reveal that our non-oracle models are competitive with their oracle counterparts in terms of generating faithful plot events and can benefit from better content selectors. Both oracle and non-oracle models generate unfaithful facts, suggesting future research directions.