Deep Learning models have shown very promising results in automatically composing polyphonic music pieces. However, it is hard to control such models in order to guide the compositions towards a desired goal. We are specifically interested in controlling a model to automatically generate music with a given sentiment. To this end, this paper presents a generative Deep Learning model that can be directed to compose music with a target sentiment. Besides music generation, the same model can be used for sentiment analysis of symbolic music. We evaluate the accuracy of the model in classifying the sentiment of symbolic music using a new dataset of video game soundtracks. Results show that our model obtains good prediction accuracy. A user study shows that human subjects agreed that the generated music has the intended sentiment, although negative pieces can be ambiguous.