This work concentrates on reducing the real-time factor (RTF) and word error rate (WER) of a hybrid HMM-DNN system. Our baseline system uses an architecture with TDNN and LSTM layers, which we find particularly useful for lightly reverberated environments. However, these models tend to demand more computation than is desirable. In this work, we explore alternative architectures in which singular value decomposition (SVD) is applied to the TDNN layers, as well as to the affine transforms of every LSTM cell, to reduce the RTF. We compare this approach with specifying bottleneck layers, similar to those introduced by SVD, before training. Additionally, we reduce the search space of the decoding graph to make it better suited to real-time applications. We report a 61.57% relative reduction in RTF and an almost 1% relative decrease in WER for our architecture trained on Fisher data along with reverberated versions of this dataset, in order to match one of our target test distributions.
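To make the SVD step concrete, the sketch below shows the generic low-rank factorization it relies on: a trained affine transform W is approximated by the product of two thinner matrices obtained from a truncated SVD, cutting the number of multiply-adds per frame. This is a minimal illustration, not the authors' Kaldi recipe; the matrix sizes and the retained rank k are assumptions chosen only for the example.

```python
# Minimal sketch of SVD-based low-rank factorization of an affine layer.
# Sizes and rank below are illustrative assumptions, not values from the paper.
import numpy as np

def svd_factorize(W: np.ndarray, k: int):
    """Approximate W (m x n) as A @ B with A (m x k) and B (k x n).

    Parameter count drops from m*n to k*(m + n), which is where the
    computation (and hence RTF) saving comes from when k << min(m, n).
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :k] * s[:k]   # absorb singular values into the left factor
    B = Vt[:k, :]
    return A, B

# Hypothetical 1024 x 1024 affine transform reduced to rank 256.
rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))
A, B = svd_factorize(W, k=256)

original_params = W.size            # 1,048,576
factored_params = A.size + B.size   # 524,288
rel_error = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f"params: {original_params} -> {factored_params}, relative error {rel_error:.3f}")
```

The alternative mentioned in the abstract, specifying bottleneck layers before training, amounts to fixing this factored shape (a linear layer of width k followed by the output projection) from the start rather than decomposing an already trained matrix.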