In spite of the progress in music source separation research, the small amount of publicly available clean source data remains a constant limiting factor for performance. Thus, recent advances in self-supervised learning present a largely unexplored opportunity for improving separation models by leveraging unlabelled music data. In this paper, we propose a self-supervised learning framework for music source separation inspired by the HuBERT speech representation model. We first investigate the potential impact of the original HuBERT model by inserting an adapted version of it into the well-known Demucs V2 time-domain separation model architecture. We then propose a time-frequency-domain self-supervised model, Pac-HuBERT (for primitive auditory clustering HuBERT), that we later use in combination with a Res-U-Net decoder for source separation. Pac-HuBERT uses primitive auditory features of music as unsupervised clustering labels to initialize the self-supervised pretraining process using the Free Music Archive (FMA) dataset. The resulting framework achieves better source-to-distortion ratio (SDR) performance on the MusDB18 test set than the original Demucs V2 and Res-U-Net models. We further demonstrate that it can boost performance with small amounts of supervised data. Ultimately, our proposed framework is an effective solution to the challenge of limited clean source data for music source separation.
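To make the pseudo-labeling idea concrete, the sketch below shows one way frame-level cluster IDs could be derived from primitive auditory features of unlabelled audio and then used as masked-prediction targets for HuBERT-style pretraining. The specific features (chroma, MFCCs, spectral contrast), the cluster count, the hop size, and the librosa/scikit-learn calls are illustrative assumptions for this example, not the paper's exact pipeline.

```python
# Illustrative sketch (not the paper's exact pipeline): derive frame-level
# pseudo-labels by clustering simple "primitive" auditory features, in the
# spirit of HuBERT-style pretraining targets. Feature choice, cluster count,
# and hop size below are assumptions made for the example.
import numpy as np
import librosa
from sklearn.cluster import MiniBatchKMeans


def primitive_auditory_features(audio, sr, hop_length=512):
    """Stack simple harmonic/timbral descriptors per STFT frame."""
    chroma = librosa.feature.chroma_stft(y=audio, sr=sr, hop_length=hop_length)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13, hop_length=hop_length)
    contrast = librosa.feature.spectral_contrast(y=audio, sr=sr, hop_length=hop_length)
    # (n_frames, n_features): one feature vector per frame
    return np.concatenate([chroma, mfcc, contrast], axis=0).T


def fit_pseudo_labeler(frame_features, n_clusters=100):
    """Cluster frame features; cluster IDs serve as masked-prediction targets."""
    km = MiniBatchKMeans(n_clusters=n_clusters, batch_size=1024, random_state=0)
    km.fit(frame_features)
    return km


if __name__ == "__main__":
    # Stand-in for one unlabelled corpus track (e.g. from FMA): a noisy tone.
    sr = 22050
    t = np.linspace(0, 5.0, int(5.0 * sr), endpoint=False)
    audio = 0.5 * np.sin(2 * np.pi * 220.0 * t) + 0.05 * np.random.randn(t.size)

    feats = primitive_auditory_features(audio, sr)
    labeler = fit_pseudo_labeler(feats, n_clusters=50)
    frame_labels = labeler.predict(feats)  # (n_frames,) integer pseudo-labels
    print(frame_labels[:20])
```

In a full setup, the clustering would be fit over features aggregated from the whole unlabelled corpus, and the resulting frame-level cluster IDs would supply the targets for the masked-prediction pretraining objective summarized above.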