Joint optimization of a multi-channel front-end and automatic speech recognition (ASR) has attracted much interest. While promising results have been reported for various tasks, past studies on its application to meeting transcription were limited to small-scale experiments. It is still unclear whether such a joint framework can be beneficial for a more practical setup where a massive amount of single-channel training data can be leveraged to build a strong ASR back-end. In this work, we present our investigation into the joint modeling of a mask-based beamformer and attention-encoder-decoder-based ASR in a setting where we have 75k hours of single-channel data and a relatively small amount of real multi-channel data for model training. We explore effective training procedures, including a comparison of simulated and real multi-channel training data. To guide the recognition towards a target speaker and deal with overlapped speech, we also explore various combinations of bias information, such as directions of arrival and speaker profiles. We propose an effective location bias integration method called deep concatenation for the beamformer network. In our evaluation on various meeting recordings, we show that the proposed framework achieves a substantial word error rate reduction.
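To make the deep concatenation idea concrete, the sketch below shows one plausible reading of it: a location-bias vector (e.g., an embedding of the target direction of arrival) is concatenated with the hidden activations of an intermediate layer of the mask-estimation network, rather than with the raw input features. All layer sizes, the two-stack LSTM structure, and the exact bias featurization are illustrative assumptions; the abstract does not specify them.

```python
# Minimal sketch of "deep concatenation" for a mask-based beamformer:
# the location bias is injected between two recurrent stacks instead of
# being appended to the input spectra. Dimensions are assumptions.
import torch
import torch.nn as nn


class MaskEstimatorWithDeepConcat(nn.Module):
    """Mask-estimation network with a location bias injected by deep
    concatenation at an intermediate layer (hypothetical structure)."""

    def __init__(self, feat_dim=257, bias_dim=16, hidden=256):
        super().__init__()
        # Lower stack operates on spectral features only.
        self.lower = nn.LSTM(feat_dim, hidden, batch_first=True)
        # Upper stack sees hidden activations concatenated with the bias.
        self.upper = nn.LSTM(hidden + bias_dim, hidden, batch_first=True)
        self.mask_out = nn.Sequential(nn.Linear(hidden, feat_dim), nn.Sigmoid())

    def forward(self, feats, loc_bias):
        # feats: (batch, time, feat_dim) magnitude spectra
        # loc_bias: (batch, bias_dim) embedding of the target DOA
        h, _ = self.lower(feats)
        bias = loc_bias.unsqueeze(1).expand(-1, h.size(1), -1)
        h, _ = self.upper(torch.cat([h, bias], dim=-1))  # deep concatenation
        return self.mask_out(h)  # time-frequency mask in [0, 1]


# Example: masks for a 3-second utterance guided by a target direction.
net = MaskEstimatorWithDeepConcat()
masks = net(torch.randn(1, 300, 257), torch.randn(1, 16))
print(masks.shape)  # torch.Size([1, 300, 257])
```

The estimated time-frequency masks would then feed a standard mask-based beamformer (e.g., computing speech and noise spatial covariance matrices for an MVDR filter); that downstream step is independent of where the bias is concatenated.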