Adder Neural Networks (ANNs), which contain only additions, offer a new way of developing deep neural networks with low energy consumption. Unfortunately, an accuracy drop occurs when all convolution filters are replaced by adder filters. The main reason is the optimization difficulty of ANNs based on the $\ell_1$-norm, for which the gradient estimated in back propagation is inaccurate. In this paper, we present a novel method for further improving the performance of ANNs, without increasing the number of trainable parameters, via a progressive kernel based knowledge distillation (PKKD) method. A convolutional neural network (CNN) with the same architecture is simultaneously initialized and trained as a teacher network; the features and weights of the ANN and the CNN are then transformed into a new space to eliminate the accuracy drop. The similarity is measured in a higher-dimensional space to disentangle the difference between their distributions using a kernel based method. Finally, the desired ANN is learned progressively from the information of both the ground truth and the teacher. The effectiveness of the proposed method for learning ANNs with higher performance is verified on several benchmarks. For instance, the ANN-50 trained using the proposed PKKD method obtains a 76.8\% top-1 accuracy on the ImageNet dataset, which is 0.6\% higher than that of ResNet-50.
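To make the described training scheme concrete, the following is a minimal sketch (not the authors' implementation) of a kernel based distillation loss that combines ground-truth supervision with a same-architecture CNN teacher. It assumes PyTorch; the function names (`gaussian_kernel`, `pkkd_style_loss`), the Gaussian kernel choice, and the progressive weight `alpha` are illustrative assumptions rather than details taken from the paper.

```python
# Sketch only: a kernel based distillation loss mixing ground-truth and teacher terms.
import torch
import torch.nn.functional as F


def gaussian_kernel(x, y, sigma=1.0):
    """Map flattened features into a similarity (kernel) space before comparison."""
    # Pairwise squared Euclidean distances between the two batches of features.
    dist = torch.cdist(x, y, p=2) ** 2
    return torch.exp(-dist / (2 * sigma ** 2))


def pkkd_style_loss(student_logits, teacher_logits, student_feat, teacher_feat,
                    targets, alpha, sigma=1.0):
    """Hypothetical loss: (1 - alpha) * ground-truth CE + alpha * distillation terms.

    `alpha` is a progressive weight (e.g. ramped up over training) that gradually
    shifts emphasis from the ground truth toward the teacher.
    """
    # Standard supervised loss against the ground-truth labels.
    ce = F.cross_entropy(student_logits, targets)

    # Compare student/teacher feature similarity structure in the kernel space,
    # rather than matching the raw (differently distributed) features directly.
    s = student_feat.flatten(1)
    t = teacher_feat.flatten(1)
    feat_kd = F.mse_loss(gaussian_kernel(s, s, sigma), gaussian_kernel(t, t, sigma))

    # Plain knowledge distillation on the output distributions.
    logit_kd = F.kl_div(F.log_softmax(student_logits, dim=1),
                        F.softmax(teacher_logits, dim=1),
                        reduction="batchmean")

    return (1 - alpha) * ce + alpha * (feat_kd + logit_kd)
```

In this sketch the kernel acts as the higher-dimensional mapping mentioned above: only similarity matrices are matched, so the ANN's $\ell_1$-based features need not share the CNN teacher's distribution directly.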