Deep Neural Networks (DNNs) have become the de-facto standard in computer vision, as well as in many other pattern recognition tasks. A key drawback of DNNs is that the training phase can be very computationally expensive. Organizations or individuals that cannot afford to purchase state-of-the-art hardware or tap into cloud-hosted infrastructure may face long waiting times before training completes, or might not be able to train a model at all. Investigating novel ways to reduce the training time is a potential solution to this drawback, enabling more rapid development of new algorithms and models. In this paper, we propose LightLayers, a method for reducing the number of trainable parameters in DNNs. The proposed LightLayers consist of LightDense and LightConv2D layers that are as effective as regular Dense and Conv2D layers but use fewer parameters. We resort to matrix factorization to reduce the complexity of DNN models, resulting in lightweight models that require less computational power, with little loss in accuracy. We have tested LightLayers on the MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100 datasets. Promising results are obtained for MNIST, Fashion-MNIST, and CIFAR-10, whereas CIFAR-100 shows acceptable performance while using fewer parameters.
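The parameter reduction described above rests on a standard low-rank matrix-factorization idea: a dense weight matrix W of shape (in_dim, out_dim) is replaced by two smaller factors U (in_dim, k) and V (k, out_dim) with k much smaller than both dimensions. The sketch below illustrates the principle with plain NumPy; the layer sizes and rank k are illustrative assumptions, not the paper's exact LightDense configuration.

```python
import numpy as np

def factorized_param_count(in_dim, out_dim, k):
    """Trainable weights in a rank-k factorized dense layer W ~ U @ V."""
    return in_dim * k + k * out_dim

# Example: a 784 -> 256 dense layer (e.g. first layer on MNIST).
full_params = 784 * 256                          # 200704 weights in a standard Dense layer
light_params = factorized_param_count(784, 256, 8)  # 784*8 + 8*256 = 8320 weights

rng = np.random.default_rng(0)
in_dim, out_dim, k = 784, 256, 8
U = rng.standard_normal((in_dim, k)) * 0.05      # first factor (trainable)
V = rng.standard_normal((k, out_dim)) * 0.05     # second factor (trainable)
b = np.zeros(out_dim)                            # bias

def light_dense(x):
    # Compute y = x @ (U @ V) + b as (x @ U) @ V so the full
    # in_dim x out_dim matrix is never materialized.
    return (x @ U) @ V + b

y = light_dense(rng.standard_normal((1, in_dim)))
```

At rank k = 8 the factorized layer uses roughly 4% of the weights of the full layer; the rank acts as the knob trading parameter count against representational capacity.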