A strong representation of a target speaker can aid in extracting important information about the speaker and detecting the corresponding temporal regions in a multi-speaker conversation. In this study, we propose a neural architecture that simultaneously extracts speaker representations consistent with the speaker diarization objective and detects the presence of each speaker frame by frame, regardless of the number of speakers in the conversation. A speaker-representation (z-vector) extractor and a frame-speaker contextualizer, the latter realized by a residual network that processes data in both the temporal and speaker dimensions, are integrated into a unified framework. Tests on the CALLHOME corpus show that our model outperforms most methods proposed to date. An evaluation in a more challenging case, with the number of concurrent speakers ranging from two to seven, demonstrates that our model also achieves relative diarization error rate reductions of 26.35% and 6.4% over two typical baselines, namely a traditional x-vector clustering model and an attention-based model, respectively.
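The overall data flow described above (frame embeddings, per-speaker scores, and a residual contextualizer acting along both the time and speaker axes, followed by per-frame activity probabilities) can be illustrated with a minimal NumPy sketch. All weights, dimensions, and the specific residual update below are hypothetical stand-ins for illustration only, not the paper's actual model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def diarize_sketch(frames, num_speakers, d=8, seed=0):
    """Toy stand-in for the proposed pipeline: map frames to embeddings
    (z-vector extractor), score each frame against each speaker, then
    refine the scores with a residual update that mixes context along
    the temporal and speaker dimensions (frame-speaker contextualizer)."""
    rng = np.random.default_rng(seed)
    T, F = frames.shape
    W_z = 0.1 * rng.standard_normal((F, d))           # embedding weights (random stand-in)
    W_s = 0.1 * rng.standard_normal((num_speakers, d))  # one vector per speaker
    z = frames @ W_z                                  # (T, d) frame embeddings
    scores = z @ W_s.T                                # (T, S) raw frame-speaker scores
    time_ctx = np.vstack([scores[:1], scores[:-1]])   # previous-frame context (temporal axis)
    spk_ctx = scores.mean(axis=1, keepdims=True)      # cross-speaker context (speaker axis)
    refined = scores + 0.5 * time_ctx - 0.5 * spk_ctx  # residual refinement
    return sigmoid(refined)                           # (T, S) per-frame speaker activity in (0, 1)
```

Because the output is an independent activity probability per speaker per frame, overlapping speech is handled naturally: several speakers can be active in the same frame, and the number of output columns is simply the number of speakers under consideration.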