Topic segmentation is important for understanding scientific documents, since it not only improves readability but also facilitates downstream tasks such as information retrieval and question answering by dividing a document into appropriate sections or paragraphs. In topic segmentation, topic coherence is critical for predicting segment boundaries. Most existing models try to exploit as much context as possible to extract useful topic-related information. However, additional context does not always improve results, because the local context between sentences can become incoherent as more sentences are added. To alleviate this issue, we propose siamese sentence embedding layers that process two input sentences independently, obtaining an appropriate amount of information without being hampered by excessive context. We also adopt multi-task learning with three objectives: Same Topic Prediction (STP), Topic Classification (TC), and Next Sentence Prediction (NSP). When these three classification layers are combined in a multi-task manner, they compensate for each other's limitations, improving performance on all three tasks. We experiment with different combinations of the three layers and report how each layer affects the others in the same combination, as well as the overall segmentation performance. Our proposed model achieves state-of-the-art results on the WikiSection dataset.
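To make the architecture described above concrete, here is a minimal PyTorch sketch of a siamese encoder shared between two input sentences, feeding the three classification heads (STP, TC, NSP). The class names, dimensions, head shapes, and the unweighted sum in the loss are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn


class SiameseMultiTaskSegmenter(nn.Module):
    """Sketch: a shared (siamese) sentence encoder with three task heads.

    Assumptions: the encoder maps a sentence tensor to a single embedding of
    size hidden_dim; pair-level heads (STP, NSP) act on the concatenation of
    the two embeddings; TC is predicted per sentence.
    """

    def __init__(self, encoder: nn.Module, hidden_dim: int, num_topics: int):
        super().__init__()
        self.encoder = encoder  # same weights applied to both sentences
        self.stp_head = nn.Linear(2 * hidden_dim, 2)   # same topic or not
        self.nsp_head = nn.Linear(2 * hidden_dim, 2)   # adjacent or not
        self.tc_head = nn.Linear(hidden_dim, num_topics)

    def forward(self, sent_a: torch.Tensor, sent_b: torch.Tensor):
        # Each sentence is encoded independently, so neither embedding is
        # diluted by the other's context.
        emb_a = self.encoder(sent_a)
        emb_b = self.encoder(sent_b)
        pair = torch.cat([emb_a, emb_b], dim=-1)
        return {
            "stp": self.stp_head(pair),
            "nsp": self.nsp_head(pair),
            "tc_a": self.tc_head(emb_a),
            "tc_b": self.tc_head(emb_b),
        }


def multitask_loss(out, stp_y, nsp_y, tc_y_a, tc_y_b):
    # Assumed combination: a plain unweighted sum of the per-task losses.
    ce = nn.functional.cross_entropy
    return (ce(out["stp"], stp_y) + ce(out["nsp"], nsp_y)
            + ce(out["tc_a"], tc_y_a) + ce(out["tc_b"], tc_y_b))
```

Sharing the encoder across both inputs keeps the parameter count fixed while letting the STP, TC, and NSP gradients all flow into the same sentence representation, which is how the three heads can compensate for each other's limitations.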