Analysing music with machine learning is a difficult problem with numerous constraints to consider. The nature of audio data, with its very high dimensionality and widely varying scales of structure, is one of the primary reasons it is so hard to model. Machine learning has many applications in music, such as classifying the mood of a piece, conditional music generation, and popularity prediction. The goal of this project was to develop a genre-conditional generative model of music based on Mel spectrograms and to evaluate its performance by comparing it to existing generative music models that use note-based representations. We initially implemented MelNet, an autoregressive, RNN-based generative model. However, due to its slow speed and low-fidelity output, we decided to create a new, fully convolutional architecture, called cMelGAN, based on the MelGAN [4] and conditional GAN architectures.
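Since the model operates on Mel spectrograms rather than raw waveforms or notes, the sketch below shows one common way such a representation can be computed from audio. This is a minimal NumPy-only illustration: the frame size, hop length, and number of Mel bands (`n_fft=1024`, `hop=256`, `n_mels=80`) are assumed for the example and are not the settings used in this project.

```python
import numpy as np

def hz_to_mel(f):
    # Standard HTK-style Hz -> Mel conversion.
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    # Triangular filters with centres spaced evenly on the Mel scale.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        left, centre, right = bins[i], bins[i + 1], bins[i + 2]
        for k in range(left, centre):
            fb[i, k] = (k - left) / max(centre - left, 1)
        for k in range(centre, right):
            fb[i, k] = (right - k) / max(right - centre, 1)
    return fb

def mel_spectrogram(y, sr, n_fft=1024, hop=256, n_mels=80):
    # Windowed power spectrogram (STFT magnitude squared) ...
    window = np.hanning(n_fft)
    frames = [y[s:s + n_fft] * window
              for s in range(0, len(y) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2   # (T, n_fft//2+1)
    # ... projected onto the Mel filterbank, then log-compressed.
    mel = power @ mel_filterbank(sr, n_fft, n_mels).T  # (T, n_mels)
    return 10.0 * np.log10(mel + 1e-10)

# Example: one second of a 440 Hz tone at a 22.05 kHz sample rate.
sr = 22050
t = np.arange(sr) / sr
y = 0.5 * np.sin(2 * np.pi * 440.0 * t)
S = mel_spectrogram(y, sr)
print(S.shape)  # (time frames, Mel bands)
```

A generative model such as the one described here is then trained on matrices like `S`, treating the time axis as the sequence dimension and the Mel bands as channels.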