Text summarization is an approach for identifying the important information present within text documents. This computational technique aims to generate a shorter version of the source text that includes only its relevant and salient information. In this paper, we propose a novel method to summarize a text document by clustering its contents based on latent topics produced using topic modeling techniques and by generating an extractive summary for each of the identified text clusters. All extractive sub-summaries are then combined to produce a summary for the given source document. We use the less commonly used and challenging WikiHow dataset in our approach to text summarization. This dataset is unlike the news datasets commonly used for text summarization: the well-known news datasets present their most important information in the first few lines of the source text, which makes summarizing them less challenging than summarizing the WikiHow dataset. In contrast, the documents in the WikiHow dataset are written in a generalized style and have lesser abstractedness and a higher compression ratio, thus posing a greater challenge for summary generation. Many current state-of-the-art text summarization techniques tend to eliminate important information from the source documents in favor of brevity; our proposed technique aims to capture all the varied information present in them. Although the dataset proved challenging, extensive tests within our experimental setup show that our model produces encouraging ROUGE scores and summaries when compared with other published extractive and abstractive text summarization models.
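To make the cluster-then-extract pipeline concrete, the following is a minimal sketch, assuming scikit-learn's LatentDirichletAllocation as the topic model, a regex sentence splitter, and dominant-topic probability as the sentence scorer; the function name `summarize` and these component choices are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of topic-based extractive summarization:
# cluster sentences by their dominant latent topic, pick the top
# sentence(s) per cluster, then combine the sub-summaries.
import re
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def summarize(document: str, num_topics: int = 3, sentences_per_topic: int = 1) -> str:
    # 1. Split the source document into candidate sentences.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]
    if len(sentences) <= num_topics:
        return document

    # 2. Fit a topic model over the sentences; fit_transform returns
    #    each sentence's distribution over the latent topics.
    vectorizer = CountVectorizer(stop_words="english")
    X = vectorizer.fit_transform(sentences)
    lda = LatentDirichletAllocation(n_components=num_topics, random_state=0)
    sent_topic = lda.fit_transform(X)  # shape: (n_sentences, num_topics)

    # 3. Cluster: assign each sentence to its dominant topic.
    dominant = sent_topic.argmax(axis=1)

    # 4. Extract: within each cluster, keep the sentences with the
    #    highest probability for that cluster's topic.
    chosen = []
    for t in range(num_topics):
        members = np.where(dominant == t)[0]
        ranked = members[np.argsort(-sent_topic[members, t])]
        chosen.extend(ranked[:sentences_per_topic])

    # 5. Combine the sub-summaries, preserving source order.
    return " ".join(sentences[i] for i in sorted(chosen))
```

Re-sorting the selected sentences by their position in the source keeps the combined summary readable even though the sub-summaries are produced independently per topic cluster.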