Recently proposed self-supervised learning approaches have been successful for pre-training speech representation models. The utility of these learned representations has been observed empirically, but not much has been studied about the type or extent of information encoded in the pre-trained representations themselves. Developing such insights can help understand the capabilities and limits of these models and enable the research community to more efficiently develop their usage for downstream applications. In this work, we begin to fill this gap by examining one recent and successful pre-trained model (wav2vec 2.0), via its intermediate representation vectors, using a suite of analysis tools. We use the metrics of canonical correlation, mutual information, and performance on simple downstream tasks with non-parametric probes, in order to (i) query for acoustic and linguistic information content, (ii) characterize the evolution of information across model layers, and (iii) understand how fine-tuning the model for automatic speech recognition (ASR) affects these observations. Our findings motivate modifying the fine-tuning protocol for ASR, which produces improved word error rates in a low-resource setting.
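As a rough illustration of the CCA-based analysis the abstract mentions, the sketch below (not the authors' code) computes a canonical-correlation similarity score between one set of frame-level layer representations and a set of acoustic features; the array shapes, feature choices, and projection dimension are illustrative assumptions.

```python
# Minimal CCA-similarity sketch, assuming frame-aligned representations.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_frames, d_model, d_feats = 2000, 768, 39          # hypothetical sizes
layer_reprs = rng.standard_normal((n_frames, d_model))    # e.g. one transformer layer's outputs
acoustic_feats = rng.standard_normal((n_frames, d_feats)) # e.g. MFCC / filterbank features

# Project both views onto a small number of maximally correlated directions.
cca = CCA(n_components=10, max_iter=1000)
x_c, y_c = cca.fit_transform(layer_reprs, acoustic_feats)

# Average correlation across canonical dimensions gives a single similarity score.
corrs = [np.corrcoef(x_c[:, i], y_c[:, i])[0, 1] for i in range(x_c.shape[1])]
print("mean canonical correlation:", float(np.mean(corrs)))
```

Repeating such a score for every layer of the pre-trained (or fine-tuned) model is one way to trace how acoustic or linguistic information evolves across layers.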