Most deep-learning-based multi-channel speech enhancement methods focus on learning a set of beamforming coefficients that directly filter the low signal-to-noise-ratio signals received by the microphones, which limits the performance of these approaches. To address this problem, this paper designs a causal neural beam filter that fully exploits the spatial-spectral information in the beam domain. Specifically, in the first stage, multiple beams steered toward all directions are formed with a parameterized super-directive beamformer. In the second stage, a neural spatial filter is learned by jointly modeling the spatial and spectral discriminability of the speech and the interference, so as to coarsely extract the desired speech. Finally, to further suppress the interference components, especially at low frequencies, a residual estimation module refines the output of the second stage. Experimental results demonstrate that the proposed approach outperforms several state-of-the-art multi-channel methods on a multi-channel speech dataset generated from the DNS-Challenge corpus.
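The first stage relies on fixed super-directive beamforming toward multiple look directions. As a hedged illustration (not the paper's implementation, whose parameterization is not specified here), the classical super-directive beamformer can be written as the MVDR solution under a spherically isotropic diffuse noise field: w = Γ⁻¹d / (dᴴΓ⁻¹d), where Γ is the diffuse-noise coherence matrix and d the steering vector. The array geometry, frequencies, and beam count below are illustrative assumptions:

```python
import numpy as np

def superdirective_beams(mic_pos, freqs, angles, c=343.0, diag_load=1e-3):
    """Fixed super-directive beamformer weights for several look directions.

    mic_pos: (M,) microphone positions along a line [m]
    freqs:   (F,) analysis frequencies [Hz]
    angles:  (B,) look directions [rad], 0 = endfire
    returns: (B, F, M) complex weights, one beam per angle and frequency
    """
    M = len(mic_pos)
    dist = np.abs(mic_pos[:, None] - mic_pos[None, :])   # pairwise spacings
    W = np.zeros((len(angles), len(freqs), M), dtype=complex)
    for fi, f in enumerate(freqs):
        # diffuse (spherically isotropic) noise coherence: sinc(2*pi*f*d/c);
        # np.sinc(x) = sin(pi*x)/(pi*x), so pass 2*f*dist/c
        Gamma = np.sinc(2.0 * f * dist / c)
        Gamma = Gamma + diag_load * np.eye(M)            # regularization
        Ginv = np.linalg.inv(Gamma)
        for bi, th in enumerate(angles):
            tau = mic_pos * np.cos(th) / c               # per-mic delays
            d = np.exp(-2j * np.pi * f * tau)            # steering vector
            W[bi, fi] = (Ginv @ d) / (d.conj() @ Ginv @ d)
    return W

# e.g. a 4-mic linear array (3.5 cm spacing), beams at 8 uniform directions
mics = np.arange(4) * 0.035
freqs = np.array([500.0, 1000.0, 2000.0])
angles = np.linspace(0.0, np.pi, 8)
W = superdirective_beams(mics, freqs, angles)
```

Each weight vector satisfies the distortionless constraint wᴴd = 1 in its own look direction, so the beams preserve the target signal while attenuating diffuse interference; the neural stages then operate on these beam-domain signals.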