Deep neural networks have shown great promise in speech separation tasks. However, obtaining good results while keeping model complexity low remains challenging in real-world applications. In this paper, we propose TDANet, an efficient, bio-inspired encoder-decoder architecture that mimics the brain's top-down attention, reducing model complexity without sacrificing performance. The top-down attention in TDANet is extracted by a global attention (GA) module and cascaded local attention (LA) layers. The GA module takes multi-scale acoustic features as input to extract a global attention signal, which then modulates features of different scales through direct top-down connections. The LA layers take features of adjacent layers as input to extract a local attention signal, which is used to modulate the lateral input in a top-down manner. On three benchmark datasets, TDANet consistently achieved separation performance competitive with previous state-of-the-art (SOTA) methods at higher efficiency. Specifically, TDANet's multiply-accumulate operations (MACs) are only 5\% of those of Sepformer, one of the previous SOTA models, and its CPU inference time is only 10\% of Sepformer's. In addition, a large-size version of TDANet obtained SOTA results on all three datasets, with MACs still only 10\% of Sepformer's and CPU inference time only 24\% of Sepformer's.
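To make the top-down modulation concrete, the following is a minimal, hypothetical PyTorch sketch of how a GA-style module might fuse multi-scale features into a global attention signal and use it to gate each scale through direct top-down connections. The module name, fusion by summation at the coarsest scale, sigmoid gating, and the 1x1 convolution are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalAttentionSketch(nn.Module):
    """Hypothetical sketch of a GA-style module: fuse multi-scale features,
    derive a global attention signal, and modulate each scale via direct
    top-down connections (illustrative, not the paper's exact design)."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolution producing the global attention signal (assumption)
        self.attn = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, feats):
        # feats: list of tensors [B, C, T_i] at progressively coarser time scales
        target_len = feats[-1].shape[-1]
        # Downsample every scale to the coarsest resolution and sum them
        # (the fusion choice here is an assumption)
        fused = sum(F.adaptive_avg_pool1d(f, target_len) for f in feats)
        gate = torch.sigmoid(self.attn(fused))  # global attention signal
        out = []
        for f in feats:
            # Top-down connection: bring the gate back to each scale and
            # modulate that scale's features multiplicatively
            g = F.interpolate(gate, size=f.shape[-1], mode="nearest")
            out.append(f * g)
        return out

# Toy usage: a three-scale [batch, channels, time] feature pyramid
feats = [torch.randn(2, 64, t) for t in (256, 128, 64)]
modulated = GlobalAttentionSketch(64)(feats)
print([f.shape for f in modulated])
```

In this sketch the attention signal is computed once at the coarsest scale and broadcast upward, which keeps the added cost small; the LA layers described above would apply an analogous gating between adjacent layers rather than globally.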