Contrastive learning has shown remarkable success in the field of multimodal representation learning. In this paper, we propose a pipeline of contrastive language-audio pretraining to develop an audio representation by combining audio data with natural language descriptions. To accomplish this goal, we first release LAION-Audio-630K, a large collection of 633,526 audio-text pairs from different data sources. Second, we construct a contrastive language-audio pretraining model by considering different audio encoders and text encoders. We incorporate a feature fusion mechanism and keyword-to-caption augmentation into the model design to further enable the model to process audio inputs of variable lengths and to enhance performance. Third, we perform comprehensive experiments to evaluate our model across three tasks: text-to-audio retrieval, zero-shot audio classification, and supervised audio classification. The results demonstrate that our model achieves superior performance in the text-to-audio retrieval task. In audio classification tasks, the model achieves state-of-the-art performance in the zero-shot setting and obtains performance comparable to that of models trained in the non-zero-shot setting. LAION-Audio-630K and the proposed model are both available to the public.
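To make the contrastive language-audio objective concrete, the sketch below shows a minimal symmetric InfoNCE loss over a batch of paired audio and text embeddings, the standard formulation for this kind of pretraining. The function name, embedding dimension, and temperature value are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of a CLAP-style symmetric contrastive loss (assumed setup).
import torch
import torch.nn.functional as F

def contrastive_loss(audio_emb: torch.Tensor,
                     text_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired audio/text embeddings."""
    # L2-normalize so the dot product is cosine similarity.
    audio_emb = F.normalize(audio_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Pairwise similarity matrix of shape (batch, batch), scaled by temperature.
    logits = audio_emb @ text_emb.t() / temperature

    # Matched audio-text pairs lie on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Average the audio-to-text and text-to-audio cross-entropy terms.
    loss_a2t = F.cross_entropy(logits, targets)
    loss_t2a = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_a2t + loss_t2a)

if __name__ == "__main__":
    # Random tensors stand in for the outputs of an audio encoder and a text encoder.
    batch, dim = 8, 512
    audio_emb = torch.randn(batch, dim)
    text_emb = torch.randn(batch, dim)
    print(contrastive_loss(audio_emb, text_emb).item())
```

In a full pipeline, `audio_emb` and `text_emb` would come from the chosen audio and text encoders projected into a shared embedding space; this loss is what pulls matched pairs together and pushes mismatched pairs apart.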