Lyric interpretations can help people understand songs and their lyrics quickly, and can also make it easier to manage, retrieve, and discover songs in ever-growing music archives. In this paper we propose BART-fusion, a novel model for generating lyric interpretations from lyrics and music audio that combines a large-scale pre-trained language model with an audio encoder. We employ a cross-modal attention module to incorporate the audio representation into the lyrics representation, helping the pre-trained language model understand the song from an audio perspective while preserving the language model's original generative performance. We also release the Song Interpretation Dataset, a new large-scale dataset for training and evaluating our model. Experimental results show that the additional audio information helps our model to understand words and music better, and to generate precise and fluent interpretations. An additional experiment on cross-modal music retrieval shows that interpretations generated by BART-fusion can also help people retrieve music more accurately than with the original BART.
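The cross-modal attention module described above can be sketched roughly as follows: lyric-token representations act as queries that attend over audio-frame representations, and the attended audio features are added back to the text representation via a residual connection (so the original language-model representation is preserved). This is a minimal single-head numpy illustration under assumed dimensions, not the paper's implementation; learned Q/K/V projection matrices and layer normalization are omitted for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(text, audio):
    """Fuse audio into text: text tokens (queries) attend over audio
    frames (keys/values); output has the same shape as `text`."""
    d_k = text.shape[-1]
    scores = text @ audio.T / np.sqrt(d_k)   # (n_text, n_audio) similarity
    weights = softmax(scores, axis=-1)       # attention over audio frames
    fused = weights @ audio                  # audio summary per text token
    return text + fused                      # residual keeps the text info

# Hypothetical sizes: 8 lyric tokens and 50 audio frames, both 64-dim.
rng = np.random.default_rng(0)
text = rng.normal(size=(8, 64))
audio = rng.normal(size=(50, 64))
out = cross_modal_attention(text, audio)
print(out.shape)  # (8, 64)
```

In the actual model, both modalities would first be projected into a shared space by learned matrices; the residual-add here stands in for the paper's stated goal of enriching, rather than replacing, the pre-trained language model's lyric representation.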