Recent transformer-based music generation methods have a context window of up to about one minute. The music generated by these methods is largely unstructured beyond the context window. Even with a longer context window, learning long-scale structure from musical data is a prohibitively challenging problem. This paper proposes integrating a text-to-music model with a large language model to generate music with form. We discuss our solutions to the challenges of such an integration. The experimental results show that the proposed method can generate 2.5-minute-long music that is highly structured, strongly organized, and cohesive.