Seeking to unravel the complexities of musical aesthetics through the lens of information dynamics, our study examines musical sequence modeling, drawing a parallel between the sequentially structured nature of music and natural language. Despite the prevalence of neural network models in MIR, the modeling of symbolic music events in music cognition and music neuroscience has largely relied on statistical models. In this proof-of-concept paper we test the hypothesis that neural network models outperform statistical models at predicting musical events. Specifically, we compare LSTM, Transformer, and GPT models against a widely used Markov model on the task of predicting the chord that follows a given sequence of chords. Using chord sequences from the McGill Billboard dataset, we trained each model to predict the next chord in a sequence. The neural models significantly outperformed the statistical ones: the LSTM with attention led with an accuracy of 0.329, followed by the Transformer at 0.321, GPT at 0.301, and the standard LSTM at 0.191; the variable-order Markov and Markov models trailed with accuracies of 0.277 and 0.140, respectively. Encouraged by these results, we extended our investigation to multidimensional modeling, employing many-to-one LSTM, LSTM-with-attention, Transformer, and GPT predictors. Trained on chord and melody lines as two-dimensional data from the CoCoPops Billboard dataset, these models achieved accuracies of 0.083, 0.312, 0.271, and 0.120, respectively, in predicting the next chord.
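To make the prediction task concrete, the following is a minimal sketch of the many-to-one next-chord setup described above, assuming PyTorch and an integer-encoded chord vocabulary; the class name, hyperparameters, and vocabulary size are illustrative assumptions, not the configuration used in the paper.

```python
# Sketch of a many-to-one LSTM next-chord predictor (illustrative only;
# hyperparameters and vocabulary size are assumptions, not the paper's).
import torch
import torch.nn as nn

class NextChordLSTM(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, chord_ids: torch.Tensor) -> torch.Tensor:
        # chord_ids: (batch, seq_len) integer-encoded chord symbols
        x = self.embed(chord_ids)          # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(x)         # h_n: (1, batch, hidden_dim)
        return self.out(h_n[-1])           # logits over the chord vocabulary

# Hypothetical usage: predict the chord following a length-8 context window.
model = NextChordLSTM(vocab_size=200)
context = torch.randint(0, 200, (4, 8))    # batch of 4 chord sequences
logits = model(context)                    # (4, 200)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 200, (4,)))
```

Training against the true next chord with cross-entropy, as above, mirrors the next-event prediction framing shared by all of the compared models; the attention, Transformer, and GPT variants differ only in how they encode the chord context.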