Recurrent Neural Networks (RNNs) have long been the dominant architecture in sequence-to-sequence learning. RNNs, however, are inherently sequential models that do not allow parallelization of their computations. Transformers are emerging as a natural alternative to standard RNNs, replacing recurrent computations with a multi-head attention mechanism. In this paper, we propose the SepFormer, a novel RNN-free Transformer-based neural network for speech separation. The SepFormer learns short- and long-term dependencies with a multi-scale approach that employs transformers. The proposed model achieves state-of-the-art (SOTA) performance on the standard WSJ0-2/3mix datasets. It reaches an SI-SNRi of 22.3 dB on WSJ0-2mix and an SI-SNRi of 19.5 dB on WSJ0-3mix. The SepFormer inherits the parallelization advantages of Transformers and achieves competitive performance even when downsampling the encoded representation by a factor of 8. It is thus significantly faster and less memory-demanding than the latest speech separation systems with comparable performance.
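To make the multi-scale idea concrete, the sketch below (not the authors' code) illustrates how short- and long-term dependencies can be modeled with two transformers: the encoded mixture is split into chunks, an "intra" transformer attends within each chunk, and an "inter" transformer attends across chunks. All names, layer counts, and dimensions are illustrative assumptions built on standard PyTorch modules.

```python
# Minimal sketch of a dual-scale transformer block, assuming a chunked
# representation of shape (batch, n_chunks, chunk_len, d_model).
import torch
import torch.nn as nn


class DualScaleTransformerBlock(nn.Module):
    def __init__(self, d_model=256, n_heads=8, n_layers=2):
        super().__init__()

        def make_encoder():
            layer = nn.TransformerEncoderLayer(
                d_model=d_model, nhead=n_heads,
                dim_feedforward=4 * d_model, batch_first=True,
            )
            return nn.TransformerEncoder(layer, num_layers=n_layers)

        self.intra = make_encoder()  # short-term (within-chunk) dependencies
        self.inter = make_encoder()  # long-term (across-chunk) dependencies

    def forward(self, x):
        # x: (batch, n_chunks, chunk_len, d_model)
        b, s, k, d = x.shape
        # Intra-chunk attention: fold chunks into the batch dimension.
        x = self.intra(x.reshape(b * s, k, d)).reshape(b, s, k, d)
        # Inter-chunk attention: attend across chunks at each chunk position.
        x = x.permute(0, 2, 1, 3).reshape(b * k, s, d)
        x = self.inter(x).reshape(b, k, s, d).permute(0, 2, 1, 3)
        return x


if __name__ == "__main__":
    feats = torch.randn(2, 10, 250, 256)  # (batch, chunks, chunk_len, d_model)
    out = DualScaleTransformerBlock()(feats)
    print(out.shape)  # torch.Size([2, 10, 250, 256])
```

Because neither transformer contains recurrence, every chunk and every time step is processed in parallel, which is the source of the speed and memory advantages claimed above; the actual SepFormer architecture and hyperparameters are described in the paper itself.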