Speech separation has been shown to be effective for multi-talker speech recognition. Under the ad hoc microphone array setup, where the array consists of spatially distributed asynchronous microphones, additional challenges must be overcome because the geometry and number of microphones are unknown beforehand. Prior studies show that, with a spatial-temporal interleaving structure, neural networks can efficiently utilize the multi-channel signals of an ad hoc array. In this paper, we further extend this approach to continuous speech separation. Several techniques are introduced to enable speech separation for real continuous recordings. First, we apply a transformer-based network for spatio-temporal modeling of the ad hoc array signals. In addition, two methods are proposed to mitigate a speech duplication problem during single-talker segments, which appears to be more severe in ad hoc array scenarios. One method is device distortion simulation, which reduces the acoustic mismatch between simulated training data and real recordings. The other is speaker counting, which detects single-talker segments and merges the output signal channels. Experimental results for AdHoc-LibriCSS, a new dataset consisting of continuous recordings of concatenated LibriSpeech utterances captured by multiple different devices, show that the proposed separation method significantly improves the ASR accuracy for overlapped speech with little performance degradation for single-talker segments.
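To make the spatial-temporal interleaving idea concrete, below is a minimal PyTorch sketch, not the paper's implementation: the class name, layer sizes, and block counts are illustrative assumptions. It alternates self-attention along the time axis of each channel with self-attention across channels at each frame, so the model does not depend on the number or geometry of the microphones.

```python
import torch
import torch.nn as nn

class SpatioTemporalTransformer(nn.Module):
    """Illustrative spatial-temporal interleaved encoder (hypothetical sizes):
    temporal layers attend over frames within each channel, spatial layers
    attend across channels at each frame, so any channel count is handled."""

    def __init__(self, dim=256, heads=4, blocks=2):
        super().__init__()
        self.temporal = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True)
            for _ in range(blocks))
        self.spatial = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True)
            for _ in range(blocks))

    def forward(self, x):
        # x: (batch, channels, frames, dim) features from the distributed mics
        b, c, t, d = x.shape
        for temporal, spatial in zip(self.temporal, self.spatial):
            # temporal pass: each channel attends over its own frames
            x = temporal(x.reshape(b * c, t, d)).reshape(b, c, t, d)
            # spatial pass: each frame attends across all channels, which
            # stays valid however many microphones the ad hoc array has
            x = x.transpose(1, 2).reshape(b * t, c, d)
            x = spatial(x).reshape(b, t, c, d).transpose(1, 2)
        return x

# Example: a 3-microphone ad hoc array, 100 frames of 256-dim features.
feats = torch.randn(1, 3, 100, 256)
encoded = SpatioTemporalTransformer()(feats)  # same shape as the input
```

Because the spatial attention treats the channel axis as a set, the same weights apply whether the array has two microphones or ten, which is the property the abstract attributes to the interleaving structure.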
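The speaker-counting mitigation can likewise be sketched in a few lines. The snippet below is an assumption-laden stand-in, not the paper's counter: a trained classifier would replace the energy heuristic, and the function names and threshold are hypothetical. It flags segments where one separation output carries almost no energy and merges the two output channels so single-talker speech is not emitted twice.

```python
import numpy as np

def count_speakers(out_a, out_b, ratio_thresh=0.1):
    """Crude energy-ratio stand-in for a trained speaker counter: if one
    separated stream is nearly silent relative to the other, assume the
    segment contains a single talker."""
    e_a, e_b = np.mean(out_a ** 2), np.mean(out_b ** 2)
    weak, strong = sorted((e_a, e_b))
    return 1 if weak < ratio_thresh * strong else 2

def merge_if_single_speaker(out_a, out_b):
    """Merge the two separation outputs when only one talker is detected,
    so the same speech does not appear duplicated on both channels."""
    if count_speakers(out_a, out_b) == 1:
        return (out_a + out_b,)   # one merged stream for the lone talker
    return (out_a, out_b)         # keep both streams for overlapped speech
```

Applied segment by segment over a continuous recording, this post-processing leaves overlapped regions untouched while collapsing single-talker regions to one stream, matching the behavior the abstract describes.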