Despite their impressive offline results, deep learning models for symbolic music generation are not widely used in live performances due to a deficit of musically meaningful control parameters and a lack of structured musical form in their outputs. To address these issues we introduce LooperGP, a method for steering a Transformer-XL model towards generating loopable musical phrases of a specified number of bars and time signature, enabling its use as a tool for live coding performances. We show that by training LooperGP on 93,681 musical loops extracted from the DadaGP dataset, we can steer its output to produce three times as many loopable phrases as our baseline. In a subjective listening test with 31 participants, LooperGP loops achieved positive median ratings for originality, musical coherence and loop smoothness, demonstrating the method's potential as a performance tool.