This paper introduces MauBERT, a multilingual extension of HuBERT that leverages articulatory features for robust cross-lingual phonetic representation learning. We continue HuBERT pre-training on 55 languages, with supervision derived from a phonetic-to-articulatory feature mapping. The models learn from multilingual data to predict articulatory features or phones, yielding language-independent representations that capture phonetic properties shared across languages. Through comprehensive ABX discriminability testing, we show that MauBERT models produce more context-invariant representations than state-of-the-art multilingual self-supervised learning models. The models also adapt effectively to unseen languages and casual speech with minimal self-supervised fine-tuning (10 hours of speech). Together, these results establish an effective approach for instilling linguistic inductive biases in self-supervised speech models.
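To make the core supervision signal concrete, the sketch below illustrates one way a phone-to-articulatory-feature mapping could be realized. It uses the open-source panphon library as a stand-in; panphon, the function name phones_to_targets, and the zero-vector fallback are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch, assuming panphon as the source of articulatory features.
# panphon maps IPA segments to vectors of feature values in {-1, 0, +1}.
import panphon

ft = panphon.FeatureTable()

def phones_to_targets(ipa_phones):
    """Map frame-level IPA phone labels to articulatory feature vectors.

    Each phone label (e.g. from forced alignment) becomes a vector over
    features such as [syllabic], [sonorant], [nasal], etc. Vectors like
    these could serve as prediction targets in place of HuBERT's k-means
    cluster IDs, giving a target space shared across languages.
    """
    targets = []
    for phone in ipa_phones:
        vecs = ft.word_to_vector_list(phone, numeric=True)
        # A single phone symbol yields one feature vector; fall back to
        # a zero vector (hypothetical choice) for unknown symbols.
        targets.append(vecs[0] if vecs else [0] * len(ft.names))
    return targets

print(ft.names[:5])                   # first few articulatory feature names
print(phones_to_targets(["p", "a"]))  # one feature vector per phone
```

Because the feature inventory is language-independent, phones from any of the 55 pre-training languages project into the same target space, which is what allows a single model to learn cross-lingual phonetic structure.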