Self-supervised pre-training models have been used successfully in several machine learning domains, yet only a small amount of work addresses music. In this work, we treat a music spectrogram as a sequence of patches and design Patchifier, a self-supervised model that captures the features of these sequential patches by drawing on self-supervised learning methods from both the NLP and CV domains. We use no labeled data for pre-training, only a subset of the MTAT dataset containing 16k music clips. After pre-training, we apply the model to several downstream tasks. Our model achieves competitive results compared with other audio representation models. Moreover, our work demonstrates that it is reasonable to treat audio as a sequence of patch segments.
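The abstract does not spell out how a spectrogram becomes a patch sequence; as a rough illustration of the idea only, the following NumPy sketch splits a log-mel spectrogram into non-overlapping square patches and flattens each one into a token, the kind of sequence a Patchifier-style encoder could consume. The function name, patch size, and spectrogram dimensions are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def patchify_spectrogram(spec: np.ndarray, patch_size: int = 16) -> np.ndarray:
    """Split a (n_mels, n_frames) spectrogram into non-overlapping
    patch_size x patch_size patches, returned as a
    (num_patches, patch_size**2) token sequence."""
    n_mels, n_frames = spec.shape
    # Trim both axes so they divide evenly by the patch size.
    n_mels -= n_mels % patch_size
    n_frames -= n_frames % patch_size
    spec = spec[:n_mels, :n_frames]
    # Reshape into a grid of patches, then flatten each patch into one token.
    patches = (
        spec.reshape(n_mels // patch_size, patch_size,
                     n_frames // patch_size, patch_size)
        .transpose(0, 2, 1, 3)
        .reshape(-1, patch_size * patch_size)
    )
    return patches

if __name__ == "__main__":
    # Stand-in for a log-mel spectrogram of one music clip (96 mel bins, 256 frames);
    # these sizes are hypothetical, not from the paper.
    log_mel = np.random.randn(96, 256).astype(np.float32)
    tokens = patchify_spectrogram(log_mel, patch_size=16)
    print(tokens.shape)  # (96, 256): 6 x 16 patches, each flattened to 256 values
```

The resulting token sequence can then be fed to a Transformer-style encoder in the same way NLP models consume word tokens or vision models consume image patches.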