Modern multi-document summarization (MDS) methods are based on transformer architectures. They generate state-of-the-art summaries but lack explainability. We focus on graph-based transformer models for MDS, as they have recently gained popularity. We aim to improve the explainability of graph-based MDS by analyzing their attention weights. In a graph-based MDS model such as GraphSum, vertices represent the textual units, while the edges form a similarity graph over the units. We compare GraphSum's performance using different textual units, i.e., sentences versus paragraphs, on two news benchmark datasets, namely WikiSum and MultiNews. Our experiments show that paragraph-level representations provide the best summarization performance. Thus, we subsequently focus on analyzing the paragraph-level attention weights of GraphSum's attention heads and decoding layers in order to improve the explainability of a transformer-based MDS model. As a reference metric, we calculate the ROUGE scores between the input paragraphs and each sentence in the generated summary, which indicate source origin information via text similarity. We observe a high correlation between the attention weights and this reference metric, especially on the later decoding layers of the transformer architecture. Finally, we investigate whether the generated summaries follow a pattern of positional bias by extracting which paragraph provided the most information for each generated summary. Our results show that there is a high correlation between the position in the summary and the source origin.
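The following is a minimal sketch of the reference metric described above: ROUGE scores between every input paragraph and every generated summary sentence, correlated with per-paragraph attention weights. It assumes the `rouge-score` and `scipy` packages; the function names, the choice of ROUGE-2 F1, and the assumed shape of the attention matrix (e.g., decoder cross-attention averaged over heads and sentence tokens) are illustrative, not the paper's exact implementation.

```python
# Sketch: ROUGE-based reference metric and its correlation with attention.
import numpy as np
from rouge_score import rouge_scorer
from scipy.stats import pearsonr


def reference_matrix(paragraphs, summary_sentences, rouge_type="rouge2"):
    """ROUGE F1 between every (summary sentence, input paragraph) pair.

    Rows index summary sentences, columns index input paragraphs, so each
    row indicates which paragraphs a sentence is textually similar to.
    """
    scorer = rouge_scorer.RougeScorer([rouge_type], use_stemmer=True)
    scores = np.zeros((len(summary_sentences), len(paragraphs)))
    for i, sent in enumerate(summary_sentences):
        for j, para in enumerate(paragraphs):
            scores[i, j] = scorer.score(para, sent)[rouge_type].fmeasure
    return scores


def attention_rouge_correlation(attention, rouge_matrix):
    """Pearson correlation between attention weights and ROUGE scores.

    `attention` is assumed to have shape
    (num_summary_sentences, num_paragraphs), e.g. one such matrix per
    decoding layer; comparing correlations across layers would reveal
    where attention aligns best with the ROUGE reference metric.
    """
    r, p = pearsonr(attention.ravel(), rouge_matrix.ravel())
    return r, p
```

Under this sketch, a positional-bias analysis would take, for each summary sentence, the argmax over the paragraph axis of `reference_matrix(...)` to identify the paragraph that contributed the most information.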