State-of-the-art deep neural networks (DNNs) have been widely applied to various real-world tasks and achieve significant performance on cognitive problems. However, the growth of DNNs in width and depth results in a huge number of parameters, which strains storage and memory budgets and limits the deployment of DNNs on resource-constrained platforms such as portable devices. By converting redundant models into compact ones, compression is a practical way to reduce storage and memory consumption. In this paper, we develop a nonlinear tensor ring network (NTRN) in which both fully-connected and convolutional layers are compressed via tensor ring decomposition. Furthermore, to mitigate the accuracy loss caused by compression, a nonlinear activation function is embedded into the tensor contraction and convolution operations inside the compressed layer. Experimental results demonstrate the effectiveness and superiority of the proposed NTRN for image classification with two basic neural networks, LeNet-5 and VGG-11, on three datasets: MNIST, Fashion-MNIST, and CIFAR-10.
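To make the core idea concrete, the following is a minimal NumPy sketch of a fully-connected layer compressed with a tensor ring (TR) decomposition, with a nonlinearity applied after the input-side contractions in the spirit of NTRN. All sizes (input modes 4x4, output modes 4x4, a uniform TR rank of 3) and the choice of ReLU are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 16x16 dense weight is factorized with input
# modes (I1, I2) = (4, 4), output modes (O1, O2) = (4, 4), TR rank r = 3.
I1, I2, O1, O2, r = 4, 4, 4, 4, 3

# Four TR cores forming a ring over the bond indices (a0, a1, a2, a3):
# G1[a0, i1, a1], G2[a1, i2, a2], G3[a2, o1, a3], G4[a3, o2, a0].
G1 = rng.standard_normal((r, I1, r))
G2 = rng.standard_normal((r, I2, r))
G3 = rng.standard_normal((r, O1, r))
G4 = rng.standard_normal((r, O2, r))


def relu(t):
    return np.maximum(t, 0.0)


def ntrn_fc_forward(x):
    """TR-compressed FC layer with an activation inserted after each
    input-side contraction (a sketch of the NTRN idea)."""
    X = x.reshape(I1, I2)
    # Contract input mode i1 with core G1 -> tensor of shape (r, I2, r)
    T = np.einsum('aib,ij->ajb', G1, X)
    T = relu(T)  # nonlinearity embedded inside the layer (assumed ReLU)
    # Contract input mode i2 and bond a1 with core G2 -> shape (r, r)
    T = np.einsum('ajb,bjc->ac', T, G2)
    T = relu(T)
    # Contract with the output cores; the trace over a0 closes the ring.
    T = np.einsum('ac,cod->aod', T, G3)        # shape (r, O1, r)
    Y = np.einsum('aod,dpa->op', T, G4)        # shape (O1, O2)
    return Y.reshape(O1 * O2)


# Parameter count: 4 cores of 3*4*3 = 36 entries each, i.e. 144,
# versus 16*16 = 256 for the uncompressed dense weight.
n_tr_params = sum(g.size for g in (G1, G2, G3, G4))
```

With larger layers and modest ranks the ratio becomes far more favorable, since TR storage grows linearly in the number of modes while the dense weight grows multiplicatively.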