State-of-the-art automatic speech recognition (ASR) system development is data and computation intensive. The optimal design of deep neural networks (DNNs) for these systems often requires expert knowledge and empirical evaluation. In this paper, a range of neural architecture search (NAS) techniques are used to automatically learn two types of hyper-parameters of factored time delay neural networks (TDNN-Fs): i) the left and right splicing context offsets; and ii) the dimensionality of the bottleneck linear projection at each hidden layer. These techniques include the differentiable neural architecture search (DARTS) method, integrating architecture learning with lattice-free MMI (LF-MMI) training; Gumbel-Softmax and pipelined DARTS methods, reducing the confusion over candidate architectures and improving the generalization of architecture selection; and penalized DARTS, incorporating resource constraints to balance the trade-off between performance and system complexity. Parameter sharing among TDNN-F architectures allows an efficient search over up to 7^28 different systems. Statistically significant word error rate (WER) reductions of up to 1.2% absolute and a relative model size reduction of 31% were obtained over a state-of-the-art 300-hour Switchboard corpus trained baseline LF-MMI TDNN-F system featuring speed perturbation, i-Vector and learning hidden unit contribution (LHUC) based speaker adaptation, as well as RNNLM rescoring. Performance contrasts on the same task against recent end-to-end systems reported in the literature suggest the best NAS auto-configured system achieves state-of-the-art WERs of 9.9% and 11.1% on the NIST Hub5'00 and RT03S test sets, respectively, with up to 96% model size reduction. Further analysis using Bayesian learning shows that ...
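The Gumbel-Softmax relaxation used for architecture selection above can be sketched as follows. The candidate bottleneck dimensionalities, the uniform initial logits, and the pure-Python implementation are illustrative assumptions, not the paper's code; in the actual system the logits are learnt jointly with LF-MMI training and a choice is made per search point (7 candidates at 28 points gives the 7^28 search space).

```python
import math
import random

def gumbel_softmax(logits, temperature=1.0, rng=random):
    """Soft, near-one-hot sample over candidate architecture choices.

    Adds Gumbel(0, 1) noise to each learnable logit, then applies a
    temperature-scaled softmax. Low temperatures push the weights
    towards a one-hot selection while keeping them differentiable.
    """
    # Gumbel(0, 1) noise via inverse transform sampling
    noisy = [l - math.log(-math.log(rng.random() or 1e-12)) for l in logits]
    scaled = [n / temperature for n in noisy]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical candidate bottleneck dims for one TDNN-F layer (7 choices)
candidates = [25, 50, 80, 100, 120, 160, 200]
logits = [0.0] * len(candidates)  # learnable parameters in practice

weights = gumbel_softmax(logits, temperature=0.5)
chosen = candidates[max(range(len(weights)), key=weights.__getitem__)]
```

The soft `weights` would multiply the outputs of the shared candidate projections during search; after training, the highest-weight candidate is kept as the selected architecture.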