Deep neural networks (DNNs) represent the mainstream methodology for supervised speech enhancement, primarily due to their capability to model complex functions using hierarchical representations. However, a recent study revealed that DNNs trained on a single corpus fail to generalize to untrained corpora, especially in low signal-to-noise ratio (SNR) conditions. Developing a noise-, speaker-, and corpus-independent speech enhancement algorithm is essential for real-world applications. In this study, we propose a self-attending recurrent neural network (SARNN) for time-domain speech enhancement to improve cross-corpus generalization. SARNN comprises recurrent neural networks (RNNs) augmented with self-attention blocks and feedforward blocks. We evaluate SARNN on different corpora with nonstationary noises in low SNR conditions. Experimental results demonstrate that SARNN substantially outperforms competitive approaches to time-domain speech enhancement, such as RNNs and dual-path SARNNs. Additionally, we report an important finding that the two popular approaches to speech enhancement, complex spectral mapping and time-domain enhancement, obtain similar results for RNN and SARNN with large-scale training. We also provide a challenging subset of the test set used in this study for evaluating future algorithms and facilitating direct comparisons.
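To make the described architecture concrete, below is a minimal PyTorch-style sketch of one SARNN block: an RNN sub-block augmented with a self-attention sub-block and a feedforward sub-block, as the abstract describes. This is an illustrative reading only, not the authors' exact architecture; the hidden sizes, the use of residual connections and layer normalization, and the helper names (`SARNNBlock`, `rnn_proj`) are assumptions.

```python
# A minimal sketch of a self-attending RNN (SARNN) block, assuming frame
# features extracted from the noisy waveform for time-domain enhancement.
# Residual connections, layer normalization, and all dimensions below are
# illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn


class SARNNBlock(nn.Module):
    def __init__(self, dim: int, hidden: int = 256, heads: int = 4):
        super().__init__()
        # Recurrent sub-block: models local temporal structure.
        self.rnn = nn.LSTM(dim, hidden, batch_first=True)
        self.rnn_proj = nn.Linear(hidden, dim)
        self.norm1 = nn.LayerNorm(dim)
        # Self-attention sub-block: models long-range dependencies over time.
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        # Feedforward sub-block.
        self.ff = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim)
        )
        self.norm3 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim) frame features of the noisy waveform.
        r, _ = self.rnn(x)
        x = self.norm1(x + self.rnn_proj(r))           # residual RNN sub-block
        a, _ = self.attn(x, x, x, need_weights=False)  # self-attention over frames
        x = self.norm2(x + a)
        return self.norm3(x + self.ff(x))              # residual feedforward sub-block


# Usage: stack such blocks over framed waveform features.
if __name__ == "__main__":
    frames = torch.randn(2, 100, 64)  # (batch, frames, frame_size)
    block = SARNNBlock(dim=64)
    print(block(frames).shape)        # torch.Size([2, 100, 64])
```

A stack of such blocks would sit between a framing (analysis) stage and an overlap-add (synthesis) stage to map noisy waveforms directly to enhanced waveforms, which is the usual structure of time-domain enhancement models.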