Many self-supervised speech models, varying in their pre-training objective, input modality, and pre-training data, have been proposed in the last few years. Despite impressive empirical successes on downstream tasks, we still have a limited understanding of the properties encoded by the models and the differences across models. In this work, we examine the intermediate representations for a variety of recent models. Specifically, we measure acoustic, phonetic, and word-level properties encoded in individual layers, using a lightweight analysis tool based on canonical correlation analysis (CCA). We find that these properties evolve across layers differently depending on the model, and the variations relate to the choice of pre-training objective. We further investigate the utility of our analyses for downstream tasks by comparing the property trends with performance on speech recognition and spoken language understanding tasks. We discover that CCA trends provide reliable guidance to choose layers of interest for downstream tasks and that single-layer performance often matches or improves upon using all layers, suggesting implications for more efficient use of pre-trained models.
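To make the layer-wise analysis concrete, below is a minimal sketch of a CCA-based similarity score, not the authors' exact tool. It assumes scikit-learn is available; the array shapes, the `cca_similarity` helper, and the random stand-in data are all illustrative assumptions, with `X` standing for one layer's frame-level activations and `Y` for property features such as phone labels.

```python
# Minimal sketch (not the paper's exact tool): scoring how strongly a
# layer representation X encodes a property Y via CCA.
# Assumptions: scikit-learn is installed; X is (n_frames, d_model) layer
# activations, Y is (n_frames, d_prop) property features.
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_similarity(X, Y, n_components=10):
    """Mean correlation of the top canonical component pairs of X and Y."""
    cca = CCA(n_components=n_components, max_iter=1000)
    X_c, Y_c = cca.fit_transform(X, Y)
    # Correlate each pair of canonical variates, then average.
    corrs = [np.corrcoef(X_c[:, i], Y_c[:, i])[0, 1]
             for i in range(n_components)]
    return float(np.mean(corrs))

# Usage: compute the score per layer and track how it evolves with depth;
# the data here is random and only demonstrates the call.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 768))  # stand-in for one layer's activations
Y = rng.normal(size=(2000, 40))   # stand-in for phone-label features
print(cca_similarity(X, Y))
```

Scoring each layer this way yields the per-layer property trends the abstract describes, from which a single layer of interest can be selected for a downstream task.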