Self-supervised audio representation learning offers an attractive alternative for obtaining generic audio embeddings that can be employed in various downstream tasks. Published approaches that consider both audio and the words/tags associated with it do not employ text processing models capable of generalizing to tags unseen during training. In this work we propose a method for learning audio representations using an audio autoencoder (AAE), a general word embeddings model (WEM), and a multi-head self-attention (MHA) mechanism. MHA attends over the output of the WEM, providing a contextualized representation of the tags associated with the audio, and we align the output of MHA with the output of the encoder of the AAE using a contrastive loss. We jointly optimize the AAE and MHA, and we evaluate the learned audio representations (i.e. the output of the encoder of the AAE) by utilizing them in three different downstream tasks, namely sound, music genre, and music instrument classification. Our results show that employing multi-head self-attention with multiple heads in the tag-based network yields better audio representations.
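To illustrate the overall setup described above, the following is a minimal PyTorch sketch of the alignment between the audio branch (the encoder of the AAE) and the tag branch (MHA over pretrained word embeddings). The layer sizes, the mean-pooling over attended tag embeddings, and the NT-Xent-style contrastive loss are assumptions made for illustration, not the exact configuration used in the paper.

```python
# Minimal sketch (assumed dimensions, pooling, and loss; not the paper's exact setup).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioEncoder(nn.Module):
    """Stand-in for the encoder half of the audio autoencoder (AAE)."""
    def __init__(self, n_mels=96, emb_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_mels, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(128, emb_dim),
        )

    def forward(self, mel):          # mel: (batch, n_mels, time)
        return self.net(mel)         # (batch, emb_dim)

class TagAttention(nn.Module):
    """Multi-head self-attention over the word embeddings (WEM output) of the tags."""
    def __init__(self, word_dim=300, emb_dim=256, n_heads=4):
        super().__init__()
        self.proj = nn.Linear(word_dim, emb_dim)
        self.mha = nn.MultiheadAttention(emb_dim, n_heads, batch_first=True)

    def forward(self, tag_embs):     # tag_embs: (batch, n_tags, word_dim)
        x = self.proj(tag_embs)
        attended, _ = self.mha(x, x, x)
        return attended.mean(dim=1)  # (batch, emb_dim), pooled contextualized tags

def contrastive_loss(audio_z, tag_z, temperature=0.1):
    """NT-Xent-style loss: matching (audio, tags) pairs in the batch are positives."""
    audio_z = F.normalize(audio_z, dim=-1)
    tag_z = F.normalize(tag_z, dim=-1)
    logits = audio_z @ tag_z.t() / temperature
    targets = torch.arange(audio_z.size(0), device=audio_z.device)
    return F.cross_entropy(logits, targets)

# Usage with random stand-in data; in joint training the AAE decoder's
# reconstruction loss would be added to this contrastive objective.
audio_enc, tag_net = AudioEncoder(), TagAttention()
mel = torch.randn(8, 96, 128)        # batch of log-mel spectrogram excerpts
tags = torch.randn(8, 5, 300)        # word embeddings of 5 tags per clip
loss = contrastive_loss(audio_enc(mel), tag_net(tags))
loss.backward()
```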