State-of-the-art automatic speech recognition (ASR) systems are becoming increasingly complex and expensive for practical applications. This paper presents the development of a high-performance, low-footprint 4-bit quantized LF-MMI trained factored time delay neural network (TDNN) based ASR system on the 300-hr Switchboard corpus. A key feature of the overall system design is to account for the fine-grained, varying performance sensitivity of different model components to quantization errors. To this end, a set of neural architectural compression and mixed precision quantization approaches were used to facilitate hidden-layer-level auto-configuration of optimal factored TDNN weight matrix subspace dimensionality and quantization bit-widths. The proposed techniques were also used to produce 2-bit mixed precision quantized Transformer language models. Experiments conducted on the Switchboard data suggest that the proposed neural architectural compression and mixed precision quantization techniques consistently outperform the uniform precision quantized baseline systems of comparable bit-widths in terms of word error rate (WER). An overall "lossless" compression ratio of 13.6 was obtained over the baseline full precision system, including both the TDNN and Transformer components, while incurring no statistically significant WER increase.
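To make the mixed precision idea concrete, the following is a minimal illustrative sketch (not the paper's actual implementation) of symmetric uniform weight quantization with per-layer bit-widths. The layer names and bit-width assignment here are hypothetical; the intuition is that layers more sensitive to quantization error keep higher precision, while less sensitive layers can be pushed down to 2-4 bits.

```python
import numpy as np

def quantize_weights(w, n_bits):
    """Symmetric uniform quantization of a weight array to n_bits."""
    levels = 2 ** (n_bits - 1) - 1        # e.g. 7 levels each side for 4-bit signed
    scale = np.max(np.abs(w)) / levels    # per-tensor scale factor
    q = np.round(w / scale)               # integer codes in [-levels, levels]
    return q * scale                      # de-quantized (simulated) weights

# Hypothetical per-layer bit-width assignment: sensitivity-aware mixed precision.
rng = np.random.default_rng(0)
layer_bits = {"tdnn1": 8, "tdnn2": 4, "tdnn3": 2}
for name, bits in layer_bits.items():
    w = rng.standard_normal((16, 16))
    w_q = quantize_weights(w, bits)
    mse = np.mean((w - w_q) ** 2)
    print(f"{name}: {bits}-bit, quantization MSE = {mse:.4f}")
```

Running the sketch shows quantization error growing as the bit-width shrinks, which is exactly the trade-off that motivates assigning bit-widths per layer rather than uniformly.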