Prompt tuning is a technique that tunes a small set of parameters to steer a pre-trained language model (LM) to directly generate outputs for downstream tasks. Recently, prompt tuning has demonstrated its storage and computation efficiency in both natural language processing (NLP) and speech processing. These advantages have also made prompt tuning a candidate approach for serving a pre-trained LM across multiple tasks in a unified manner. For speech processing, SpeechPrompt has shown high parameter efficiency and competitive performance on a few speech classification tasks. However, whether SpeechPrompt is capable of serving a large number of tasks remains unanswered. In this work, we propose SpeechPrompt v2, a prompt tuning framework capable of performing a wide variety of speech classification tasks, covering multiple languages and prosody-related tasks. Experimental results show that SpeechPrompt v2 achieves performance on par with prior works with fewer than 0.15M trainable parameters in a unified framework.
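The core mechanism described above, a small set of trainable prompt vectors prepended to the input of a frozen pre-trained LM, can be illustrated with a minimal sketch. All dimensions and names below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (assumptions for illustration only).
vocab_size, dim = 100, 16     # frozen LM's vocabulary and hidden size
num_prompt_tokens = 5         # length of the small trainable prompt

# Frozen pre-trained embedding table: never updated during prompt tuning.
frozen_embeddings = rng.normal(size=(vocab_size, dim))

# The ONLY trainable parameters: a few continuous prompt vectors.
# Their count (num_prompt_tokens * dim) is tiny relative to the LM itself.
prompt = rng.normal(size=(num_prompt_tokens, dim))

def prepend_prompt(token_ids):
    """Embed the input tokens and prepend the trainable prompt vectors,
    steering the frozen LM toward the downstream task."""
    token_embeds = frozen_embeddings[token_ids]            # (seq_len, dim)
    return np.concatenate([prompt, token_embeds], axis=0)  # (num_prompt_tokens + seq_len, dim)

inputs = prepend_prompt(np.array([3, 7, 42]))
print(inputs.shape)  # (8, 16): 5 prompt vectors + 3 token embeddings
```

During training only `prompt` receives gradient updates while the LM stays frozen, which is why the storage cost per task is just the prompt itself (well under the 0.15M trainable parameters reported for SpeechPrompt v2 in this illustrative setting).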