Attention-based sequence-to-sequence models for automatic speech recognition jointly train an acoustic model, a language model, and an alignment mechanism. As a consequence, the language model component is trained only on transcribed audio-text pairs. This motivates the use of shallow fusion with an external language model at inference time. Shallow fusion refers to log-linear interpolation of the model's scores with those of a separately trained language model at each step of the beam search. In this work, we investigate the behavior of shallow fusion across a range of conditions: different types of language models, different decoding units, and different tasks. On Google Voice Search, we demonstrate that shallow fusion with a neural LM over wordpieces yields a 9.1% relative word error rate reduction (WERR) over our competitive attention-based sequence-to-sequence model, obviating the need for second-pass rescoring.
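The log-linear interpolation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the candidate scores, the wordpiece vocabulary, and the interpolation weight `lam` are all hypothetical, and a real decoder would apply this rule to every hypothesis in the beam at every step.

```python
import math

def shallow_fusion_scores(asr_log_probs, lm_log_probs, lam=0.3):
    """Log-linear interpolation at one beam-search step:
    score(y) = log P_asr(y | x, y_<t) + lam * log P_lm(y | y_<t).
    `lam` is a tunable interpolation weight (hypothetical value here)."""
    return {tok: asr_log_probs[tok] + lam * lm_log_probs[tok]
            for tok in asr_log_probs}

# Toy per-token log-probabilities over a three-wordpiece candidate set.
asr = {"_the": math.log(0.6), "_a": math.log(0.3), "_an": math.log(0.1)}
lm  = {"_the": math.log(0.5), "_a": math.log(0.4), "_an": math.log(0.1)}

fused = shallow_fusion_scores(asr, lm)
best = max(fused, key=fused.get)  # token with the highest fused score
```

In an actual decoder the fused scores are computed for every beam extension before pruning; the external LM never influences training, only this inference-time scoring.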