Cross-lingual science journalism generates popular science stories from scientific articles in a language different from that of the source, for a non-expert audience. A cross-lingual popular summary must therefore contain the salient content of the input document, and that content should be coherent, comprehensible, and written in the local language of the target audience. We improve these aspects of cross-lingual summary generation by jointly training two high-level NLP tasks: simplification and cross-lingual summarization. The former reduces linguistic complexity, while the latter focuses on cross-lingual abstractive summarization. We propose SimCSum, a novel multi-task architecture consisting of one shared encoder and two parallel decoders that jointly learn simplification and cross-lingual summarization. We empirically evaluate SimCSum against several strong baselines across multiple evaluation metrics and through human evaluation. Overall, SimCSum achieves statistically significant improvements over the state of the art on two non-synthetic cross-lingual scientific datasets. Furthermore, we conduct an in-depth investigation of the linguistic properties of the generated summaries and an error analysis.
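To make the shared-encoder/two-decoder design concrete, the sketch below shows one plausible instantiation of the SimCSum idea: a single Transformer encoder whose states feed two parallel decoders, one trained on simplification targets and one on cross-lingual summaries, combined under a weighted joint loss. All module sizes, the `SimCSumSketch` and `joint_loss` names, and the equal loss weighting `alpha=0.5` are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a shared-encoder, two-decoder multi-task model
# (assumed instantiation; hyperparameters are placeholders).
import torch
import torch.nn as nn

class SimCSumSketch(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512, nhead=8, num_layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.shared_encoder = nn.TransformerEncoder(enc_layer, num_layers)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        # Two parallel decoders attend to the same encoder states.
        # (nn.TransformerDecoder deep-copies its layer, so the two
        # decoders have independent parameters.)
        self.simplify_decoder = nn.TransformerDecoder(dec_layer, num_layers)
        self.summary_decoder = nn.TransformerDecoder(dec_layer, num_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, simp_ids, summ_ids):
        # Encode the source article once; both tasks share this representation.
        memory = self.shared_encoder(self.embed(src_ids))
        simp_h = self.simplify_decoder(self.embed(simp_ids), memory)
        summ_h = self.summary_decoder(self.embed(summ_ids), memory)
        return self.lm_head(simp_h), self.lm_head(summ_h)

def joint_loss(simp_logits, summ_logits, simp_tgt, summ_tgt, alpha=0.5):
    """Weighted sum of the two task losses (equal weighting is an assumption)."""
    ce = nn.CrossEntropyLoss()
    l_simp = ce(simp_logits.flatten(0, 1), simp_tgt.flatten())
    l_summ = ce(summ_logits.flatten(0, 1), summ_tgt.flatten())
    return alpha * l_simp + (1 - alpha) * l_summ

# Usage with toy batches of token ids:
model = SimCSumSketch()
src = torch.randint(0, 32000, (2, 64))
simp = torch.randint(0, 32000, (2, 32))
summ = torch.randint(0, 32000, (2, 32))
simp_logits, summ_logits = model(src, simp, summ)
loss = joint_loss(simp_logits, summ_logits, simp, summ)
loss.backward()
```

The key design point the abstract describes is that gradients from both tasks flow into the shared encoder, so simplification supervision can shape the representation used for cross-lingual summarization.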