The advent of hyper-scale, general-purpose pre-trained models is shifting the paradigm away from building task-specific models from scratch. In the field of audio research, task-agnostic pre-trained models with high transferability and adaptability have achieved state-of-the-art performance through fine-tuning on downstream tasks. Nevertheless, re-training all the parameters of these massive models entails an enormous amount of time and cost, along with a huge carbon footprint. To overcome these limitations, the present study explores and applies efficient transfer learning methods in the audio domain. We also propose an integrated parameter-efficient tuning (IPET) framework that aggregates the embedding prompt (a prompt-based learning approach) and the adapter (an effective transfer learning method). We demonstrate the efficacy of the proposed framework using two backbone pre-trained audio models with different characteristics: the audio spectrogram transformer and wav2vec 2.0. The proposed IPET framework exhibits remarkable performance compared to the fine-tuning method with far fewer trainable parameters in four downstream tasks: sound event classification, music genre classification, keyword spotting, and speaker verification. Furthermore, we identify and analyze the shortcomings of the IPET framework, providing lessons and research directions for parameter-efficient tuning in the audio domain.
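To make the two aggregated components concrete, the following is a minimal NumPy sketch of the general ideas behind embedding prompts (learnable vectors prepended to the input sequence) and bottleneck adapters (small residual modules whose weights are the only ones trained while the backbone stays frozen). All dimensions, initializations, and names here are hypothetical illustrations, not the paper's actual IPET implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only (not taken from the paper).
seq_len, d_model, n_prompt, d_bottleneck = 10, 16, 4, 8

# Output of a frozen backbone layer for one input: (seq_len, d_model).
hidden = rng.standard_normal((seq_len, d_model))

# Embedding prompt: learnable vectors prepended to the token sequence.
prompt = rng.standard_normal((n_prompt, d_model)) * 0.02
prompted = np.concatenate([prompt, hidden], axis=0)  # (n_prompt + seq_len, d_model)

# Adapter: down-projection, nonlinearity, up-projection, with a
# residual connection; only these small matrices are trainable.
W_down = rng.standard_normal((d_model, d_bottleneck)) * 0.02
W_up = rng.standard_normal((d_bottleneck, d_model)) * 0.02

def adapter(x):
    # Residual bottleneck with a ReLU nonlinearity.
    return x + np.maximum(x @ W_down, 0.0) @ W_up

out = adapter(prompted)

# The trainable parameter count stays tiny relative to a frozen
# multi-million-parameter backbone.
n_trainable = prompt.size + W_down.size + W_up.size
```

In practice both components would be optimized by backpropagation with the backbone's weights frozen; this sketch only shows the data flow and why the trainable-parameter budget is small.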