In recent years, diffusion models have gained increasing popularity for image and speech generation. However, directly generating music waveforms from free-form text prompts remains under-explored. In this paper, we propose the first diffusion-based text-to-waveform music generation model that can accept arbitrary free-form text. We incorporate the free-form textual prompt as a condition that guides the waveform generation process of the diffusion model. To address the lack of parallel text-music data, we collect a dataset of text-music pairs from the Internet with weak supervision. In addition, we compare two formats of conditioning text (music tags and free-form text) and demonstrate the superior performance of our method in terms of text-music relevance. We further show that our model, generating music directly in the waveform domain, outperforms previous work by a large margin in terms of diversity, quality, and text-music relevance.
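To make the conditioning idea concrete, below is a minimal sketch of text-conditioned diffusion training on raw waveforms. It assumes a standard DDPM-style noise-prediction objective; the module names (ToyTextEncoder, CondDenoiser) and the toy mean-pooled text encoder are hypothetical illustrations, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

# Sketch: condition a diffusion denoiser on a free-form text prompt.
# Assumes a DDPM-style objective; all modules here are illustrative only.

class ToyTextEncoder(nn.Module):
    """Maps token ids of a free-form prompt to one conditioning vector."""
    def __init__(self, vocab_size=10000, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)

    def forward(self, token_ids):                  # (B, T_text)
        return self.embed(token_ids).mean(dim=1)   # (B, dim), mean-pooled

class CondDenoiser(nn.Module):
    """Predicts the added noise from a noisy waveform, timestep, and text condition."""
    def __init__(self, wave_len=16000, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(wave_len + dim + 1, 512), nn.SiLU(),
            nn.Linear(512, wave_len),
        )

    def forward(self, noisy_wave, t, text_cond):   # (B, L), (B,), (B, dim)
        h = torch.cat([noisy_wave, text_cond, t.float().unsqueeze(1)], dim=1)
        return self.net(h)

def training_step(wave, token_ids, text_enc, denoiser, alphas_cumprod):
    """One DDPM-style step: corrupt the waveform, predict the noise given the text."""
    B = wave.size(0)
    t = torch.randint(0, alphas_cumprod.size(0), (B,))
    a_bar = alphas_cumprod[t].unsqueeze(1)          # (B, 1)
    noise = torch.randn_like(wave)
    noisy = a_bar.sqrt() * wave + (1 - a_bar).sqrt() * noise
    pred = denoiser(noisy, t, text_enc(token_ids))
    return nn.functional.mse_loss(pred, noise)
```

At sampling time, the same conditioning vector would steer every denoising step; techniques such as classifier-free guidance are commonly used to strengthen text adherence, though the abstract does not specify the exact conditioning mechanism.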