Modeling speaker variability is a key challenge for automatic speech recognition (ASR) systems. In this paper, learning hidden unit contributions (LHUC) based adaptation techniques with compact speaker dependent (SD) parameters are used to facilitate both speaker adaptive training (SAT) and unsupervised test-time speaker adaptation for end-to-end (E2E) lattice-free MMI (LF-MMI) models. An unsupervised model-based adaptation framework is proposed to estimate the SD parameters in the E2E paradigm using LF-MMI and cross entropy (CE) criteria. Various regularization methods for standard LHUC adaptation, e.g., Bayesian LHUC (BLHUC) adaptation, are systematically investigated to mitigate the risk of overfitting, on E2E LF-MMI CNN-TDNN and CNN-TDNN-BLSTM models. Lattice-based confidence score estimation is used for adaptation data selection to reduce supervision label uncertainty. Experiments on the 300-hour Switchboard task suggest that applying BLHUC in the proposed unsupervised E2E adaptation framework to byte pair encoding (BPE) based E2E LF-MMI systems consistently outperformed the baseline systems, with relative word error rate (WER) reductions of up to 10.5% and 14.7% on the NIST Hub5'00 and RT03 evaluation sets, and achieved the best performance, with WERs of 9.0% and 9.7%, respectively. These results are comparable to those of state-of-the-art adapted hybrid LF-MMI systems and adapted Conformer-based E2E systems.
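To make the core idea concrete: LHUC adapts a pre-trained network to a speaker by learning a small per-speaker amplitude vector that rescales each hidden unit's activation, leaving all shared weights frozen. The following is a minimal NumPy sketch of that scaling function (the function name and shapes are illustrative, not from the paper); the standard parameterization uses an amplitude of 2*sigmoid(r) so that r = 0 recovers the unadapted network.

```python
import numpy as np

def lhuc_scale(hidden, r):
    """Apply LHUC speaker-dependent amplitude scaling to hidden activations.

    hidden: array of shape (T, D), hidden-layer outputs for T frames
    r:      array of shape (D,), the compact SD parameters for one speaker

    The amplitude a(r) = 2 * sigmoid(r) lies in (0, 2), and r = 0 gives
    a = 1, i.e. the unadapted speaker-independent network.
    """
    amplitude = 2.0 / (1.0 + np.exp(-r))
    return hidden * amplitude

# Sanity check: zero SD parameters leave the activations unchanged.
h = np.random.randn(5, 8)
adapted = lhuc_scale(h, np.zeros(8))
```

During test-time adaptation only r is updated (here, under the LF-MMI or CE criterion on confidence-selected data), which is what keeps the SD parameter count compact.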