Deep learning-based music source separation has attracted considerable interest in recent years. Most existing methods operate on either spectrograms or waveforms. Spectrogram-based models learn masks that separate the magnitude spectrogram of a mixture into its constituent sources, whereas waveform-based models directly generate the waveforms of the individual sources. The two types of models have complementary strengths: the former is superior for harmonic sources such as vocals, while the latter performs better on percussion and bass instruments. In this work, we improve upon the state-of-the-art (SoTA) models and combine the best of both worlds. The backbones of the proposed framework, dubbed Danna-Sep, are two spectrogram-based models, namely a modified X-UMX and a U-Net, and an enhanced Demucs as the waveform-based model. Given an input mixture, we linearly combine the respective outputs of the three models to obtain the final result. Our experiments show that, despite its simplicity, Danna-Sep surpasses the SoTA models by a large margin in terms of Source-to-Distortion Ratio.
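The final combination step described above is a simple per-source linear blend of the three models' outputs. The sketch below illustrates one way such blending could be implemented, assuming every separator has already been reduced to a callable that maps a mixture waveform to per-source waveform estimates (spectrogram-based models would first be inverted back to the time domain). The function name `blend_separations`, the dummy separators, and the weights are illustrative placeholders, not the actual Danna-Sep configuration.

```python
import numpy as np

def blend_separations(mixture, models, weights):
    """Linearly combine per-source waveform estimates from several separators.

    mixture: np.ndarray of shape (channels, samples)
    models:  list of callables, each mapping a mixture to an array of shape
             (num_sources, channels, samples)
    weights: sequence of shape (num_models,) or (num_models, num_sources);
             assumed to sum to 1 along the model axis
    """
    # Stack all estimates: (num_models, num_sources, channels, samples)
    estimates = np.stack([m(mixture) for m in models])
    w = np.asarray(weights, dtype=estimates.dtype)
    if w.ndim == 1:
        w = w[:, None]  # one weight per model, shared across sources
    # Broadcast weights over channels and samples, then sum over models.
    return (w[..., None, None] * estimates).sum(axis=0)

if __name__ == "__main__":
    # Dummy stand-ins for X-UMX, U-Net, and Demucs outputs (hypothetical).
    rng = np.random.default_rng(0)
    mix = rng.standard_normal((2, 44100))          # 1 s of stereo audio
    dummy = lambda x: np.stack([x * 0.25] * 4)     # pretend 4-source separator
    out = blend_separations(mix, [dummy, dummy, dummy], [0.4, 0.3, 0.3])
    print(out.shape)                               # (4, 2, 44100)
```

With per-model-and-per-source weights, the blend can favor the spectrogram-based models on harmonic sources and the waveform-based model on percussion and bass, which is the motivation for combining the two families in the first place.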