The recent surge in popularity of diffusion models for image generation has brought new attention to the potential of these models in other areas of media synthesis. One area that has yet to be fully explored is the application of diffusion models to music generation. Music generation requires handling multiple aspects, including the temporal dimension, long-term structure, multiple layers of overlapping sounds, and nuances that only trained listeners can detect. In our work, we investigate the potential of diffusion models for text-conditional music generation. We develop a cascading latent diffusion approach that can generate multiple minutes of high-quality stereo music at 48kHz from textual descriptions. For each model, we make an effort to maintain reasonable inference speed, targeting real-time generation on a single consumer GPU. In addition to trained models, we provide a collection of open-source libraries with the hope of facilitating future work in the field. We open-source the following:
- Music samples for this paper: https://bit.ly/anonymous-mousai
- All music samples for all models: https://bit.ly/audio-diffusion
- Code: https://github.com/archinetai/audio-diffusion-pytorch
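To make the cascading latent diffusion idea more concrete, the sketch below illustrates the two-stage structure in plain PyTorch: a first stage that compresses stereo audio into a compact latent and decodes it back, and a second, text-conditional diffusion stage that generates those latents from a text embedding. This is a minimal illustration under assumed shapes and hyperparameters; all names here (WaveformCodec, LatentDenoiser, generate) are hypothetical placeholders and do not reflect the actual architecture or the API of the open-sourced audio-diffusion-pytorch library.

```python
# Hypothetical sketch of a two-stage cascading latent diffusion pipeline.
# Names and architectures are illustrative, not the paper's actual models.
import torch
import torch.nn as nn


class WaveformCodec(nn.Module):
    """Stage 1 (placeholder): compress stereo audio into a compact latent and
    decode it back. A strided conv encoder/decoder stands in for the real
    first-stage model here."""
    def __init__(self, latent_channels: int = 32, stride: int = 64):
        super().__init__()
        self.encoder = nn.Conv1d(2, latent_channels, kernel_size=stride, stride=stride)
        self.decoder = nn.ConvTranspose1d(latent_channels, 2, kernel_size=stride, stride=stride)

    def encode(self, waveform: torch.Tensor) -> torch.Tensor:
        return self.encoder(waveform)

    def decode(self, latent: torch.Tensor) -> torch.Tensor:
        return self.decoder(latent)


class LatentDenoiser(nn.Module):
    """Stage 2 (placeholder): predicts the noise added to a latent,
    conditioned on a text embedding and a diffusion timestep."""
    def __init__(self, latent_channels: int = 32, text_dim: int = 768):
        super().__init__()
        self.cond = nn.Linear(text_dim + 1, latent_channels)
        self.net = nn.Conv1d(2 * latent_channels, latent_channels, kernel_size=3, padding=1)

    def forward(self, z: torch.Tensor, t: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        b, c, n = z.shape
        cond = self.cond(torch.cat([text_emb, t[:, None]], dim=-1))  # (b, c)
        cond = cond[:, :, None].expand(b, c, n)                      # broadcast over time
        return self.net(torch.cat([z, cond], dim=1))


@torch.no_grad()
def generate(codec, denoiser, text_emb, latent_length=512, steps=50):
    """Text-conditional sampling: denoise a random latent, then decode to audio."""
    b = text_emb.shape[0]
    z = torch.randn(b, 32, latent_length)
    for i in reversed(range(steps)):
        t = torch.full((b,), i / steps)
        noise_pred = denoiser(z, t, text_emb)
        z = z - noise_pred / steps  # crude Euler-style update, for illustration only
    return codec.decode(z)          # stereo waveform at the codec's effective rate


# Usage: a frozen text encoder (e.g. a language model) would supply text_emb.
codec, denoiser = WaveformCodec(), LatentDenoiser()
text_emb = torch.randn(1, 768)               # stand-in for a real text embedding
audio = generate(codec, denoiser, text_emb)  # shape (1, 2, latent_length * stride)
```

In the cascaded setup, long-range musical structure is modeled cheaply in the compressed latent space, while the first stage is responsible for reconstructing fine-grained 48kHz stereo detail; this separation is what keeps inference tractable on a single consumer GPU.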