The ability to learn universal audio representations that can solve diverse speech, music, and environment tasks can spur many applications that require general sound content understanding. In this work, we introduce a holistic audio representation evaluation suite (HARES) spanning 12 downstream tasks across audio domains and provide a thorough empirical study of recent sound representation learning systems on that benchmark. We discover that previous sound event classification or speech models do not generalize outside of their domains. We observe that more robust audio representations can be learned with the SimCLR objective; however, the model's transferability depends heavily on its architecture. We find that the Slowfast architecture is well suited to learning the rich representations required across domains, but that its performance is sensitive to the normalization scheme. Based on these findings, we propose a novel normalizer-free Slowfast NFNet and achieve state-of-the-art performance across all domains.
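The SimCLR objective referred to above is the NT-Xent contrastive loss: two augmented "views" of each clip are embedded, and each embedding must identify its partner view among all other embeddings in the batch. The following is a minimal NumPy sketch of that loss, assuming a plain softmax over cosine similarities; the function name, batch shapes, and temperature value are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.1):
    """NT-Xent (SimCLR) loss over two batches of embeddings.

    z1, z2: (N, D) arrays holding two augmented views of the same N clips.
    Returns the mean contrastive loss over all 2N anchors.
    """
    # L2-normalize so that dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)           # (2N, D)
    sim = z @ z.T / temperature                    # (2N, 2N) scaled similarities
    n = z1.shape[0]
    # Exclude each anchor's similarity with itself from the softmax.
    np.fill_diagonal(sim, -np.inf)
    # The positive for anchor i in z1 is row i in z2, and vice versa.
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()
```

With a low temperature, the loss is near zero when the two views embed almost identically and grows as the views decorrelate, which is what drives the encoder toward augmentation-invariant representations.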