Continuous speech separation (CSS) aims to separate overlapping voices from a continuous influx of conversational audio containing an unknown number of utterances spoken by an unknown number of speakers. A common application scenario is transcribing a meeting conversation recorded by a microphone array. Prior studies explored various deep learning models for time-frequency mask estimation, followed by a minimum variance distortionless response (MVDR) filter to improve the automatic speech recognition (ASR) accuracy. The performance of these methods is fundamentally upper-bounded by MVDR's spatial selectivity. Recently, the all-deep-learning MVDR (ADL-MVDR) model was proposed for neural beamforming and demonstrated superior performance in a target speech extraction task using pre-segmented input. In this paper, we further adapt ADL-MVDR to the CSS task with several enhancements to enable end-to-end neural beamforming. The proposed system achieves a significant word error rate reduction over a baseline spectral masking system on the LibriCSS dataset. Moreover, the proposed neural beamformer is shown to be comparable to a state-of-the-art MVDR-based system in real meeting transcription tasks, including AMI, while showing potential to further simplify the runtime implementation and reduce the system latency with frame-wise processing.
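For context on the MVDR filter the abstract refers to, a minimal numerical sketch of the classical solution is shown below. This is not the paper's ADL-MVDR system; it only illustrates the standard closed-form MVDR weights w = Φ_n⁻¹ d / (dᴴ Φ_n⁻¹ d) for one frequency bin, where Φ_n is the noise spatial covariance and d is the steering vector toward the target speaker. The covariance and steering vector here are randomly generated placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4  # number of microphones (illustrative)

# Hypothetical noise spatial covariance: Hermitian positive definite.
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
Phi_n = A @ A.conj().T + M * np.eye(M)

# Hypothetical steering vector toward the target direction.
d = np.exp(-1j * 2 * np.pi * rng.random(M))

# Closed-form MVDR weights: w = Phi_n^{-1} d / (d^H Phi_n^{-1} d).
Phi_n_inv_d = np.linalg.solve(Phi_n, d)
w = Phi_n_inv_d / (d.conj() @ Phi_n_inv_d)

# The distortionless constraint w^H d = 1 holds by construction.
print(abs(w.conj() @ d))
```

The "spatial selectivity" bound mentioned in the abstract stems from this structure: with M microphones, the filter has only M complex degrees of freedom per frequency bin, which limits how many interfering directions it can suppress regardless of how well the masks (and hence Φ_n) are estimated.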