Seeking to understand musical aesthetics through the lens of information dynamics, our study examines musical sequence modeling, drawing a parallel between the sequentially structured nature of music and that of natural language. Despite the prevalence of neural network models in music information retrieval (MIR), the modeling of symbolic music events in music cognition and music neuroscience has largely relied on statistical models. In this "proof of concept" paper we argue that neural network models outperform statistical models at predicting musical events. Specifically, we compare LSTM, Transformer, and GPT models against widely used Markov models on the task of predicting a chord event following a sequence of chords. Using chord sequences from the McGill Billboard dataset, we trained each model to predict the next chord from a given sequence of chords. We found that neural models significantly outperformed statistical ones: the LSTM with attention led with an accuracy of 0.85, followed by the Transformer at 0.58, the Transformer with a GPT head at 0.56, and the standard LSTM at 0.43. The Variable Order Markov and Markov models trailed behind with accuracies of 0.31 and 0.23, respectively. Encouraged by these results, we extended our investigation to multidimensional modeling, employing many-to-one LSTM, LSTM-with-attention, Transformer, and GPT predictors. These models were trained on chord and melody lines as two-dimensional data from the CoCoPops Billboard dataset, achieving accuracies of 0.21, 0.56, 0.39, and 0.24, respectively, in predicting the next chord.
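The abstract does not describe the implementation, but the many-to-one next-chord setup it references can be illustrated with a minimal sketch. The following PyTorch code is an assumption-laden illustration, not the authors' method: the vocabulary size, embedding and hidden dimensions, sequence length, and the random toy batch are all hypothetical stand-ins for the Billboard chord data.

```python
import torch
import torch.nn as nn

# Hypothetical hyperparameters: the paper does not specify the chord
# vocabulary size or model dimensions, so these values are assumptions.
VOCAB_SIZE = 100   # number of distinct chord symbols (assumed)
EMBED_DIM = 64
HIDDEN_DIM = 128
SEQ_LEN = 8        # length of the conditioning chord sequence (assumed)

class NextChordLSTM(nn.Module):
    """Many-to-one LSTM: encode a chord sequence, predict the next chord."""
    def __init__(self, vocab_size, embed_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, chord_ids):
        # chord_ids: (batch, seq_len) integer chord indices
        x = self.embed(chord_ids)        # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(x)       # h_n: (1, batch, hidden_dim)
        return self.head(h_n[-1])        # (batch, vocab_size) logits

model = NextChordLSTM(VOCAB_SIZE, EMBED_DIM, HIDDEN_DIM)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy batch: random chord index sequences stand in for real Billboard data.
inputs = torch.randint(0, VOCAB_SIZE, (32, SEQ_LEN))
targets = torch.randint(0, VOCAB_SIZE, (32,))

logits = model(inputs)
loss = loss_fn(logits, targets)
loss.backward()
optimizer.step()
```

The two-dimensional variant described in the abstract would extend this sketch by embedding chord and melody tokens separately and concatenating them at each timestep before the LSTM; the attention, Transformer, and GPT predictors replace the recurrent encoder with their respective architectures.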