We demonstrate that language models pre-trained on codified (discretely encoded) music audio learn representations that are useful for downstream MIR tasks. Specifically, we explore representations from Jukebox (Dhariwal et al. 2020): a music generation system containing a language model trained on codified audio from 1M songs. To determine whether Jukebox's representations contain information useful for MIR, we use them as input features to train shallow models on several MIR tasks. Relative to representations from conventional MIR models that are pre-trained on tagging, we find that using representations from Jukebox as input features yields 30% stronger performance on average across four MIR tasks: tagging, genre classification, emotion recognition, and key detection. For key detection, we observe that representations from Jukebox are considerably stronger than those from models pre-trained on tagging, suggesting that pre-training via codified audio language modeling may address blind spots in conventional approaches. We interpret the strength of Jukebox's representations as evidence that modeling audio instead of tags provides richer representations for MIR.
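To make the probing protocol concrete, below is a minimal sketch of the "shallow model on frozen representations" setup the abstract describes. It assumes the expensive step (extracting per-clip feature vectors from the pre-trained model) has already been done elsewhere; the file names "jukebox_features.npy" and "labels.npy" and the choice of a scikit-learn logistic-regression probe are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch of a shallow probe on frozen pre-trained representations.
# Assumption: features were precomputed (one vector per audio clip) and
# saved to disk; only the probe below is trained, never the backbone.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical precomputed data for a classification task such as
# genre classification.
X = np.load("jukebox_features.npy")  # shape: (n_clips, feature_dim)
y = np.load("labels.npy")            # shape: (n_clips,)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# A shallow linear probe: its test accuracy serves as a proxy for how
# much task-relevant information the frozen representations contain.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)

print(f"Probe accuracy: {accuracy_score(y_test, probe.predict(X_test)):.3f}")
```

Under this setup, comparing probes trained on features from different pre-trained models (e.g., Jukebox versus a tagging model) isolates the quality of the representations, since the probe capacity is held fixed.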