Loops, seamlessly repeatable musical segments, are a cornerstone of modern music production. Contemporary artists often mix and match various sampled or pre-recorded loops based on musical criteria such as rhythm, harmony and timbral texture to create compositions. Taking such criteria into account, we present LoopNet, a feed-forward generative model for creating loops conditioned on intuitive parameters. We leverage Music Information Retrieval (MIR) models as well as a large collection of public loop samples in our study and use the Wave-U-Net architecture to map control parameters to audio. We also evaluate the quality of the generated audio and propose intuitive controls for composers to map the ideas in their minds to an audio loop.