Transformers have recently achieved state-of-the-art performance in speech separation. These models, however, are computationally demanding and require a large number of learnable parameters. This paper explores Transformer-based speech separation with a reduced computational cost. Our main contribution is the development of the Resource-Efficient Separation Transformer (RE-SepFormer), a self-attention-based architecture that reduces the computational burden in two ways. First, it uses non-overlapping blocks in the latent space. Second, it operates on compact latent summaries calculated from each chunk. The RE-SepFormer achieves competitive performance on the popular WSJ0-2Mix and WHAM! datasets in both causal and non-causal settings. Notably, it scales significantly better than previous Transformer- and RNN-based architectures in terms of memory and inference time, making it more suitable for processing long mixtures.
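To make the two cost-saving ideas concrete, the following is a minimal PyTorch sketch of non-overlapping chunking and per-chunk summaries. It assumes mean pooling as the summary operator and illustrative shapes (chunk size 50, 256 latent features); the actual RE-SepFormer layer details are those described in the paper, not this sketch.

```python
import torch
import torch.nn.functional as F

def chunk_nonoverlap(x, chunk_size):
    """Split a latent sequence into non-overlapping chunks.

    x: (batch, time, features). Dual-path models such as SepFormer use
    ~50%-overlapping chunks, which roughly doubles the number of frames
    processed; dropping the overlap removes that redundancy.
    """
    B, T, feats = x.shape
    pad = (-T) % chunk_size                 # pad so time divides evenly
    x = F.pad(x, (0, 0, 0, pad))
    return x.view(B, -1, chunk_size, feats)  # (batch, n_chunks, chunk_size, features)

def chunk_summaries(chunks):
    """One compact summary vector per chunk (here: the mean over frames).

    The inter-chunk stage can then attend over n_chunks summary vectors
    instead of every frame, shrinking the self-attention cost.
    """
    return chunks.mean(dim=2)               # (batch, n_chunks, features)

# Example: 1000 latent frames with 256 features
x = torch.randn(1, 1000, 256)
chunks = chunk_nonoverlap(x, chunk_size=50)  # (1, 20, 50, 256)
summaries = chunk_summaries(chunks)          # (1, 20, 256)
```

With overlapping chunks, the intra-chunk stage would process roughly twice as many frames, and an inter-chunk stage attending over all frames would scale quadratically in sequence length; attending over one summary per chunk reduces that to quadratic in the (much smaller) number of chunks.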