In this paper, we present an approach for minimizing the computational complexity of trained Convolutional Neural Networks (ConvNets). The idea is to approximate all elements of a given ConvNet, replacing the original convolutional filters and parameters (pooling and bias coefficients, and activation functions) with efficient approximations capable of extreme reductions in computational complexity. Low-complexity convolution filters are obtained through a binary (zero-one) linear programming scheme based on the Frobenius norm over sets of dyadic rationals. The resulting matrices allow for multiplication-free computations requiring only addition and bit-shifting operations. Such low-complexity structures pave the way for low-power, efficient hardware designs. We applied our approach to three use cases of differing complexity: (i) a "light" but efficient ConvNet for face detection (with around 1000 parameters); (ii) another one for hand-written digit classification (with more than 180000 parameters); and (iii) a significantly larger ConvNet, AlexNet, with $\approx$1.2 million matrices. We evaluated the overall performance on the respective tasks for different levels of approximation. In all considered applications, very low-complexity approximations were derived while maintaining almost identical classification performance.
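The multiplication-free property described above follows from restricting weights to dyadic rationals, i.e. numbers of the form $k/2^m$: the integer numerator can be decomposed into its set bits, so each partial product becomes a bit-shift, and the power-of-two denominator becomes a final right shift. The following is a minimal sketch of that idea in Python; the function name and decomposition strategy are illustrative, not taken from the paper.

```python
# Illustrative sketch: apply a dyadic-rational weight k / 2**shift to an
# integer input using only additions and bit-shifts (no multiplication),
# as enabled by the dyadic-rational filter approximations in the abstract.

def shift_add_scale(x, numerator, shift):
    """Compute x * numerator >> shift via shift-and-add.

    The numerator is decomposed into powers of two (its set bits),
    so each partial product is a left shift of x, accumulated by addition.
    """
    acc = 0
    bit = 0
    n = numerator
    while n:
        if n & 1:
            acc += x << bit   # add the power-of-two partial product
        n >>= 1
        bit += 1
    return acc >> shift       # divide by 2**shift with a right shift

# Example: weight 0.75 = 3 / 2**2, so x * 3 = (x << 1) + x, then >> 2
print(shift_add_scale(40, 3, 2))  # → 30
```

In hardware, each such weight therefore costs only a few adders and wiring for the shifts, which is the basis for the low-power designs mentioned above.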