Target speech extraction, which extracts the speech of a target speaker from a mixture given auxiliary speaker clues, has recently received increased interest. Various clues have been investigated, such as pre-recorded enrollment utterances, direction information, or video of the target speaker. In this paper, we explore the use of speaker activity information as an auxiliary clue for single-channel neural network-based speech extraction. We propose a speaker activity driven speech extraction neural network (ADEnet) and show that it can achieve performance competitive with enrollment-based approaches, without the need for pre-recordings. We further demonstrate the potential of the proposed approach for processing meeting-like recordings, where speaker activity obtained from a diarization system is used as a speaker clue for ADEnet. We show that this simple yet practical approach can successfully extract speakers after diarization, which leads to improved ASR performance when using a single microphone, especially in highly overlapped conditions, with a relative word error rate reduction of up to 25%.
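The abstract does not detail ADEnet's architecture, but the core idea, conditioning an extraction network on the target speaker's frame-level activity, can be illustrated with a minimal PyTorch sketch. All names, dimensions, and design choices below (a BLSTM mask estimator, concatenating the activity as an extra feature) are illustrative assumptions, not the authors' actual model.

```python
import torch
import torch.nn as nn

class ActivityDrivenExtractor(nn.Module):
    """Sketch of an activity-driven target speech extractor.

    The target speaker's frame-level activity (binary or soft values,
    e.g. from a diarization system) is concatenated with the mixture's
    magnitude spectrogram and drives a mask-estimation BLSTM.
    """

    def __init__(self, n_freq=257, hidden=256):
        super().__init__()
        # +1 input dimension for the per-frame activity of the target speaker
        self.blstm = nn.LSTM(n_freq + 1, hidden, num_layers=2,
                             batch_first=True, bidirectional=True)
        self.mask_head = nn.Sequential(nn.Linear(2 * hidden, n_freq),
                                       nn.Sigmoid())

    def forward(self, mix_mag, activity):
        # mix_mag:  (batch, frames, n_freq) mixture magnitude spectrogram
        # activity: (batch, frames) target-speaker activity in [0, 1]
        x = torch.cat([mix_mag, activity.unsqueeze(-1)], dim=-1)
        h, _ = self.blstm(x)
        mask = self.mask_head(h)   # time-frequency mask in (0, 1)
        return mask * mix_mag      # estimated target magnitude

# Usage with random tensors standing in for real features
model = ActivityDrivenExtractor()
mix = torch.rand(1, 100, 257)             # 100 frames, 257 frequency bins
act = (torch.rand(1, 100) > 0.5).float()  # diarization-style activity
target_est = model(mix, act)
print(target_est.shape)  # torch.Size([1, 100, 257])
```

This conditioning style is what makes the approach practical for meeting recordings: the activity clue comes for free from a diarization front end, so no enrollment utterance is needed.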