This paper describes the speaker diarization system we submitted to the Multi-channel Multi-party Meeting Transcription (M2MeT) challenge, in which Mandarin meeting data were recorded in multi-channel format for diarization and automatic speech recognition (ASR) tasks. In these meeting scenarios, the unknown number of speakers and the high ratio of overlapped speech pose great challenges for diarization. Based on the assumption that acoustic, spatial-related, and speaker-related features carry valuable complementary information, we propose a target-speaker voice activity detection system built on a multi-level feature fusion mechanism (FFM-TS-VAD) to improve the performance of the conventional TS-VAD system. Furthermore, we propose a data augmentation method applied during training to improve the system's robustness when the angular difference between two speakers is relatively small. We compare the different sub-systems used in the M2MeT challenge. Our submission, a fusion of several sub-systems, ranked second in the diarization task.
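To make the fusion idea concrete, the following is a minimal sketch (not the authors' implementation) of how acoustic, spatial-related, and speaker-related features could be combined ahead of a TS-VAD back-end. All module names, dimensions, and the single-point concatenation are illustrative assumptions; the paper's FFM-TS-VAD fuses features at multiple levels rather than in one step, and PyTorch is assumed only for illustration.

```python
# Hypothetical sketch of feature fusion for target-speaker VAD.
# Dimensions and architecture choices are assumptions, not the paper's.
import torch
import torch.nn as nn


class FusionTSVADSketch(nn.Module):
    def __init__(self, acoustic_dim=80, spatial_dim=40, spk_emb_dim=256,
                 hidden_dim=256):
        super().__init__()
        # Project concatenated frame-level acoustic + spatial features.
        self.frame_proj = nn.Linear(acoustic_dim + spatial_dim, hidden_dim)
        # Project each target speaker's embedding to the same space.
        self.spk_proj = nn.Linear(spk_emb_dim, hidden_dim)
        self.encoder = nn.LSTM(hidden_dim * 2, hidden_dim, num_layers=2,
                               batch_first=True, bidirectional=True)
        self.vad_head = nn.Linear(hidden_dim * 2, 1)

    def forward(self, acoustic, spatial, spk_embs):
        # acoustic: (B, T, acoustic_dim)  e.g. log-mel filterbanks
        # spatial:  (B, T, spatial_dim)   e.g. inter-channel phase differences
        # spk_embs: (B, S, spk_emb_dim)   one embedding per target speaker
        frames = self.frame_proj(torch.cat([acoustic, spatial], dim=-1))
        outputs = []
        for s in range(spk_embs.size(1)):
            # Condition the shared frame features on one speaker embedding.
            cond = self.spk_proj(spk_embs[:, s]).unsqueeze(1)
            cond = cond.expand(-1, frames.size(1), -1)
            x, _ = self.encoder(torch.cat([frames, cond], dim=-1))
            outputs.append(torch.sigmoid(self.vad_head(x)))
        # Per-frame speech activity posterior for each target speaker: (B, T, S)
        return torch.cat(outputs, dim=-1)
```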