As the industry deploys increasingly large and complex neural networks to mobile devices, more pressure is put on the memory and compute resources of those devices. Deep compression, or compression of deep neural network weight matrices, is a technique to stretch resources for such scenarios. Existing compression methods cannot effectively compress models to smaller than 1-2% of their original size. We develop a new compression technique, DeepThin, building on existing research in the area of low-rank factorization. We identify and break artificial constraints imposed by low-rank approximations by combining rank factorization with a reshaping process that adds nonlinearity to the approximation function. We deploy DeepThin as a pluggable library integrated with TensorFlow that enables users to seamlessly compress models at different granularities. We evaluate DeepThin on two state-of-the-art acoustic models, TFKaldi and DeepSpeech, comparing it to previous compression work (pruning, HashedNets, and rank factorization), empirical limit-study approaches, and hand-tuned models. For TFKaldi, our DeepThin networks achieve better word error rates (WER) than competing methods at practically all tested compression rates: an average relative improvement of 60% over rank factorization, 57% over pruning, 23% over hand-tuned same-size networks, and 6% over the computationally expensive HashedNets. For DeepSpeech, DeepThin-compressed networks achieve better test loss than all other compression methods, reaching a 28% better result than rank factorization, 27% better than pruning, 20% better than hand-tuned same-size networks, and 12% better than HashedNets. DeepThin also provides inference performance benefits ranging from 2X to 14X speedups, depending on the compression ratio and platform cache sizes.
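To make the core idea concrete, the following is a minimal NumPy sketch of combining rank factorization with a reshape, as the abstract describes; the shapes, the rank r, and the variable names are illustrative assumptions, not the paper's implementation or notation:

```python
import numpy as np

rng = np.random.default_rng(0)

rows, cols = 512, 1024            # shape of the full layer weight W (assumed)
r = 4                             # rank of the internal factorization (assumed)
aux_rows, aux_cols = 2048, 256    # intermediate shape, chosen so that
assert aux_rows * aux_cols == rows * cols  # ...the reshape below is valid

# Only aux_rows*r + r*aux_cols parameters are stored (9,216 here)
# instead of rows*cols (524,288), i.e. ~1.8% of the original size.
W1 = rng.standard_normal((aux_rows, r)).astype(np.float32)
W2 = rng.standard_normal((r, aux_cols)).astype(np.float32)

# A plain low-rank approximation: every row of the product lies in the
# same r-dimensional subspace, which is the kind of artificial constraint
# the abstract refers to.
low_rank = W1 @ W2                # shape (2048, 256), rank <= r

# Reshaping the factored product to the target shape redistributes entries
# from different rows of the rank-r product, so the resulting matrix is
# no longer confined to rank r.
W_approx = low_rank.reshape(rows, cols)

print(np.linalg.matrix_rank(low_rank))   # <= 4
print(np.linalg.matrix_rank(W_approx))   # larger (16 for these shapes)
```

In a training setting, W1 and W2 would be the trainable variables and W_approx would be materialized as the layer's weight, so the parameter count stays at the compressed size.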