The focus of this paper is the application of classical model order reduction techniques, such as Active Subspaces and Proper Orthogonal Decomposition, to Deep Neural Networks. We propose a generic methodology for reducing the number of layers of a pre-trained network by combining the aforementioned dimensionality reduction techniques with input-output mappings, such as Polynomial Chaos Expansion and Feedforward Neural Networks. The need to compress the architecture of an existing Convolutional Neural Network is motivated by its deployment in embedded systems with strict storage constraints. Our experiments show that the reduced networks can achieve a level of accuracy similar to that of the original Convolutional Neural Network under examination, while requiring less memory.
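As a minimal sketch of the kind of reduction described above, the snippet below applies Proper Orthogonal Decomposition to a snapshot matrix standing in for the intermediate activations of a pre-trained network cut at some layer: an SVD of the centered snapshots yields a reduced basis, and the projected coordinates would then feed a small input-output map (e.g. a Feedforward Neural Network or Polynomial Chaos Expansion) replacing the discarded layers. All dimensions and the synthetic data are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for activations of a pre-trained CNN truncated at an
# intermediate layer: n_samples snapshots of a flattened feature map.
# (Illustrative sizes, not taken from the paper.)
n_samples, n_features = 200, 1024
A = rng.standard_normal((n_samples, n_features))

# Proper Orthogonal Decomposition: SVD of the centered snapshot matrix;
# the r leading right-singular vectors give a reduced basis.
r = 32
A_centered = A - A.mean(axis=0)
_, s, Vt = np.linalg.svd(A_centered, full_matrices=False)
modes = Vt[:r].T                      # (n_features, r) POD basis
A_reduced = A_centered @ modes        # (n_samples, r) reduced coordinates

# Fraction of snapshot "energy" retained by the first r modes.
energy = (s[:r] ** 2).sum() / (s ** 2).sum()
print(A_reduced.shape, float(energy))
```

A small feedforward network trained from `A_reduced` to the original network's outputs would then complete the compressed architecture; only the layers up to the cut, the `r` POD modes, and the small map need to be stored.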