The task of topical segmentation is well studied, but previous work has mostly addressed it in the context of structured, well-defined segments, such as segmentation into paragraphs or chapters, or segmenting text that originated from multiple sources. We tackle the task of segmenting running (spoken) narratives, which poses hitherto unaddressed challenges. As a test case, we address Holocaust survivor testimonies, given in English. Beyond the importance of studying these testimonies for Holocaust research, we argue that they provide an interesting test case for topical segmentation, due to their unstructured surface level, relative abundance (tens of thousands of such testimonies were collected), and the relatively confined domain that they cover. We hypothesize that boundary points between segments correspond to low mutual information between the sentences preceding and following the boundary. Based on this hypothesis, we explore a range of algorithmic approaches to the task, building on previous work on segmentation that uses generative Bayesian modeling and state-of-the-art neural machinery. Compared to manually annotated references, we find that the developed approaches show considerable improvements over previous work.
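As a rough illustration of the boundary hypothesis (not the paper's actual method), the sketch below scores every gap between consecutive sentences by the lexical cosine similarity of the sentence windows on either side, a crude bag-of-words proxy for the mutual information shared across a candidate boundary, and proposes the lowest-scoring gap as a segment boundary. The toy sentences, window size `k`, and all function names are invented for illustration.

```python
from collections import Counter
import math

def cohesion(window_a, window_b):
    """Cosine similarity between the term-frequency vectors of two
    sentence windows -- a crude lexical proxy for the mutual
    information shared across a candidate boundary."""
    ca = Counter(w for sent in window_a for w in sent)
    cb = Counter(w for sent in window_b for w in sent)
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def boundary_scores(sentences, k=2):
    """Score each gap i (between sentences i-1 and i) using up to k
    sentences of context on each side; lower scores suggest a likelier
    topic boundary."""
    return [
        (i, cohesion(sentences[max(0, i - k):i], sentences[i:i + k]))
        for i in range(1, len(sentences))
    ]

# Invented toy narrative with a topic switch at sentence index 3.
sents = [
    "we lived in a small town before the war".split(),
    "the town had a market and a school".split(),
    "our school closed when the war began".split(),
    "then we boarded the train to the camp".split(),
    "the train ride to the camp lasted days".split(),
    "conditions on the train were terrible".split(),
]
best = min(boundary_scores(sents), key=lambda t: t[1])[0]
print(best)  # the gap with minimal lexical cohesion
```

On this toy input the minimal-cohesion gap falls at the topic switch, but in realistic transcripts a surface-lexical proxy like this is far weaker than the learned estimates the abstract alludes to.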