Self-supervised learning (SSL) has proven vital for advancing research in natural language processing (NLP) and computer vision (CV). The paradigm pretrains a shared model on large volumes of unlabeled data and achieves state-of-the-art (SOTA) performance on various tasks with minimal adaptation. However, the speech processing community lacks a similar setup to systematically explore the paradigm. To bridge this gap, we introduce the Speech processing Universal PERformance Benchmark (SUPERB). SUPERB is a leaderboard that benchmarks the performance of a shared model across a wide range of speech processing tasks with minimal architecture changes and labeled data. Among the many ways to use the shared model, we focus in particular on extracting the representation learned with SSL because of its preferable reusability. We present a simple framework that solves SUPERB tasks by learning task-specialized lightweight prediction heads on top of the frozen shared model. Our results demonstrate that the framework is promising, as SSL representations show competitive generalizability and accessibility across SUPERB tasks. We release SUPERB as a challenge with a leaderboard and a benchmark toolkit to fuel research in representation learning and general speech processing.
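As a rough illustration of the framework described above, the following Python (PyTorch) sketch shows a frozen shared model feeding a lightweight, task-specialized prediction head. This is a minimal sketch, not the SUPERB toolkit's actual code: the upstream model, the frame-level feature shape, and the linear head design are illustrative assumptions.

```python
# Minimal sketch of the "frozen upstream + lightweight head" pattern.
# The upstream SSL model is passed in as an opaque nn.Module (hypothetical);
# only the small prediction head is trained on labeled downstream data.
import torch
import torch.nn as nn


class LinearHead(nn.Module):
    """Lightweight task-specialized prediction head on top of frozen SSL features."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.proj = nn.Linear(feat_dim, num_classes)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, time, feat_dim) frame-level SSL representations (assumed shape)
        return self.proj(features)


def extract_frozen_features(upstream: nn.Module, wav: torch.Tensor) -> torch.Tensor:
    """Run the shared model without gradient updates; only the head is learned."""
    upstream.eval()
    with torch.no_grad():
        return upstream(wav)  # assumed to return frame-level features (batch, time, feat_dim)
```

In such a setup, the optimizer would receive only the head's parameters, so the pretrained shared model stays fixed while each task trains its own small head.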