We introduce Noise2Music, where a series of diffusion models is trained to generate high-quality 30-second music clips from text prompts. Two types of diffusion models, a generator model, which generates an intermediate representation conditioned on text, and a cascader model, which generates high-fidelity audio conditioned on the intermediate representation and possibly the text, are trained and utilized in succession to generate high-fidelity music. We explore two options for the intermediate representation, one using a spectrogram and the other using audio with lower fidelity. We find that the generated audio is not only able to faithfully reflect key elements of the text prompt such as genre, tempo, instruments, mood, and era, but goes beyond to ground fine-grained semantics of the prompt. Pretrained large language models play a key role in this story -- they are used to generate paired text for the audio of the training set and to extract embeddings of the text prompts ingested by the diffusion models. Generated examples: https://google-research.github.io/noise2music
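The two-stage cascade described above can be sketched schematically. This is a minimal toy sketch, not the paper's models: every function here (`text_embedding`, `generator`, `cascader`) is a hypothetical stand-in that only mimics the data flow of text embedding → intermediate representation → high-fidelity audio.

```python
import hashlib
import numpy as np

def text_embedding(prompt: str, dim: int = 8) -> np.ndarray:
    # Hypothetical stand-in for a pretrained LM text encoder: a
    # deterministic pseudo-embedding derived from the prompt bytes.
    seed = int.from_bytes(hashlib.md5(prompt.encode()).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    return rng.standard_normal(dim)

def generator(text_emb: np.ndarray, length: int = 100) -> np.ndarray:
    # Stage 1 (hypothetical): the "generator" diffusion model produces an
    # intermediate representation (e.g. a spectrogram or low-fidelity
    # audio) conditioned on the text embedding.
    rng = np.random.default_rng(0)
    return rng.standard_normal(length) + text_emb.mean()

def cascader(intermediate: np.ndarray, text_emb: np.ndarray,
             factor: int = 4) -> np.ndarray:
    # Stage 2 (hypothetical): the "cascader" diffusion model upsamples the
    # intermediate representation to high-fidelity audio, optionally
    # re-conditioning on the text embedding.
    return np.repeat(intermediate, factor) + 0.01 * text_emb.mean()

emb = text_embedding("upbeat swing jazz with brass")
intermediate = generator(emb)        # intermediate representation
audio = cascader(intermediate, emb)  # higher-resolution waveform
print(audio.shape)                   # (400,)
```

The stubs only illustrate the conditioning structure: the cascader consumes both the intermediate representation and (optionally) the same text embedding that conditioned the generator.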