The foundation for research on summarization in Czech was laid by the work of Straka et al. (2018). They published SumeCzech, a large Czech news-based summarization dataset, and proposed several baseline approaches. However, the reported results make it clear that there is considerable room for improvement. In our work, we focus on the impact of named entities on the summarization of Czech news articles. First, we annotate SumeCzech with named entities. We propose a new metric, ROUGE_NE, that measures the overlap of named entities between the reference and generated summaries, and we show that it remains challenging for summarization systems to reach a high score on it. We propose an extractive summarization approach, Named Entity Density, that selects as the summary the sentence with the highest ratio between the number of named entities and the length of the sentence. The experiments show that the proposed approach achieves results close to the strong baseline for the news domain of selecting the first sentence of the article. Moreover, we demonstrate that the selected sentence reflects the style of news reports, concisely identifying who was involved and what happened, when, and where. We suggest that such a summary is beneficial in combination with the first sentence of an article in voice applications presenting news. We also propose two abstractive summarization approaches based on the Seq2Seq architecture: the first uses only the tokens of the article, while the second also has access to the named entity annotations. The experiments show that both approaches exceed the state-of-the-art results previously reported by Straka et al. (2018), with the latter achieving slightly better results on SumeCzech's out-of-domain test set.
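The two proposed components can be illustrated with a minimal sketch. It assumes sentence-level named-entity annotations are already available (e.g. from a NER tagger); the function names and the F1-style formulation of the overlap score are illustrative assumptions, not the authors' exact implementation.

```python
def named_entity_density_summary(sentences, entities_per_sentence):
    """Named Entity Density: pick the sentence with the highest ratio of
    named entities to sentence length (here measured in whitespace tokens).
    `entities_per_sentence[i]` lists the entities annotated in sentence i."""
    def density(idx):
        tokens = sentences[idx].split()
        return len(entities_per_sentence[idx]) / max(len(tokens), 1)
    best = max(range(len(sentences)), key=density)
    return sentences[best]


def rouge_ne_f1(reference_entities, generated_entities):
    """A ROUGE_NE-style score (assumed F1 form): overlap of named-entity
    sets between the reference and the generated summary."""
    ref, gen = set(reference_entities), set(generated_entities)
    if not ref or not gen:
        return 0.0
    overlap = len(ref & gen)
    precision = overlap / len(gen)
    recall = overlap / len(ref)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, given a hypothetical article whose first sentence mentions two entities in six tokens, the density-based selector prefers it over a longer, entity-free sentence, mirroring the who/when/where/what style of news leads described above.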