This paper presents our recent effort on end-to-end speaker-attributed automatic speech recognition, which jointly performs speaker counting, speech recognition, and speaker identification for monaural multi-talker audio. First, we thoroughly update the model architecture, which was previously designed around a long short-term memory (LSTM)-based attention encoder-decoder, by applying transformer architectures. Second, we propose a speaker deduplication mechanism to reduce speaker identification errors in highly overlapped regions. Experimental results on the LibriSpeechMix dataset show that the transformer-based architecture is especially effective for speaker counting, and that the proposed model reduces the speaker-attributed word error rate by 47% relative to the LSTM-based baseline. Furthermore, on the LibriCSS dataset, which consists of real recordings of overlapped speech, the proposed model achieves concatenated minimum-permutation word error rates of 11.9% and 16.3% with and without target speaker profiles, respectively, both of which are state-of-the-art results for LibriCSS in the monaural setting.
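The abstract does not spell out how the speaker deduplication mechanism works, so the following is only a minimal sketch of the underlying idea, assuming deduplication amounts to forbidding the same speaker profile from being assigned to hypotheses that overlap in time. The function name, the posterior matrix, and the greedy confidence ordering are all illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def deduplicate_speakers(posteriors, overlaps):
    """Greedy speaker assignment that forbids duplicate profiles
    among time-overlapping hypotheses (hypothetical helper).

    posteriors: (num_hypotheses, num_profiles) speaker ID posteriors.
    overlaps:   set of (i, j) pairs of hypotheses that overlap in time.
    Returns a list mapping each hypothesis to a profile index.
    """
    num_hyps = posteriors.shape[0]
    assignment = [None] * num_hyps
    # Resolve the most confident hypotheses first.
    order = np.argsort(-posteriors.max(axis=1))
    for i in order:
        # Profiles already taken by hypotheses overlapping hypothesis i.
        blocked = {assignment[j] for j in range(num_hyps)
                   if assignment[j] is not None
                   and ((i, j) in overlaps or (j, i) in overlaps)}
        # Fall back to the next-best profile when the top one is blocked.
        for p in np.argsort(-posteriors[i]):
            if int(p) not in blocked:
                assignment[i] = int(p)
                break
    return assignment

# Toy example: two overlapping hypotheses whose top profile collides.
post = np.array([[0.7, 0.2, 0.1],
                 [0.6, 0.3, 0.1]])
print(deduplicate_speakers(post, {(0, 1)}))  # -> [0, 1]
```

Restricting the constraint to overlapped regions matters: outside of overlaps the same speaker may legitimately reappear across utterances, so a global uniqueness constraint would be too strong.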