End-to-end automatic speech recognition (ASR) can achieve promising performance with large-scale training data. However, it is known that a domain mismatch between training and testing data often degrades recognition accuracy. In this work, we focus on unsupervised domain adaptation for ASR and propose CMatch, a character-level distribution matching method that performs fine-grained adaptation between each character in the two domains. First, to obtain labels for the features belonging to each character, we perform frame-level label assignment using Connectionist Temporal Classification (CTC) pseudo labels. Then, we match the character-level distributions using Maximum Mean Discrepancy (MMD). We train our algorithm with the self-training technique. Experiments on the Libri-Adapt dataset show that our proposed approach achieves 14.39% and 16.50% relative Word Error Rate (WER) reductions on cross-device and cross-environment ASR, respectively. We also comprehensively analyze different strategies for frame-level label assignment and Transformer adaptation.
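The character-level matching step can be illustrated with a minimal sketch. Assuming each frame feature has already been assigned a character via the CTC pseudo labels, the adaptation loss is an MMD estimate computed per character and averaged over the characters shared by both domains. All function names here are illustrative, not the paper's actual implementation:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel between rows of x and y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(source, target, sigma=1.0):
    # Biased empirical estimate of squared Maximum Mean Discrepancy
    # between two sets of frame features, shape (frames, dim).
    k_ss = gaussian_kernel(source, source, sigma).mean()
    k_tt = gaussian_kernel(target, target, sigma).mean()
    k_st = gaussian_kernel(source, target, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st

def char_level_mmd(src_feats, tgt_feats):
    # src_feats / tgt_feats: dict mapping a character to its (frames, dim)
    # feature matrix, gathered via frame-level CTC pseudo-label assignment.
    # The loss averages per-character MMD over characters seen in both domains.
    shared = set(src_feats) & set(tgt_feats)
    if not shared:
        return 0.0
    return sum(mmd2(src_feats[c], tgt_feats[c]) for c in shared) / len(shared)
```

In training, this loss would be added to the ASR objective so the encoder pulls each character's source and target feature distributions together; identical distributions drive the estimate to zero.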