Convolutional Neural Networks (CNNs) increase depth by stacking convolutional layers, and deeper models generally perform better in image recognition. However, empirical studies show that simply stacking convolutional layers does not make a network train better, whereas skip connections (residual learning) can improve model performance. For the image classification task, models with globally densely connected architectures perform well on large datasets such as ImageNet, but are not suitable for small datasets such as CIFAR-10 and SVHN. Different from dense connections, we propose two new algorithms for connecting layers. Baseline is a densely connected network, and the networks connected by the two new algorithms are named ShortNet1 and ShortNet2, respectively. Experimental results for image classification on CIFAR-10 and SVHN show that ShortNet1 achieves a 5% lower test error rate and 25% faster inference time than Baseline. ShortNet2 speeds up inference time by 40% with only a slight loss in test accuracy. Code and pre-trained models are available at https://github.com/RuiyangJu/Connection_Reduction.
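To make the two connection schemes contrasted above concrete, the following is a minimal PyTorch sketch of a skip (residual) connection, which sums a layer's output with its input, versus a dense connection, which concatenates all preceding feature maps. This is illustrative only: the blocks, channel counts, and growth rate are generic assumptions, not the ShortNet1/ShortNet2 algorithms defined in the paper.

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Skip connection: output = F(x) + x (element-wise sum)."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Identity shortcut added to the transformed features.
        return self.relu(self.body(x) + x)


class DenseBlock(nn.Module):
    """Dense connection: each layer receives the channel-wise
    concatenation of all preceding feature maps (DenseNet-style)."""

    def __init__(self, in_channels: int, growth_rate: int, num_layers: int):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Each new layer sees every earlier feature map.
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)


if __name__ == "__main__":
    x = torch.randn(1, 16, 32, 32)  # CIFAR-10-sized input, 16 channels (assumed)
    print(ResidualBlock(16)(x).shape)          # torch.Size([1, 16, 32, 32])
    print(DenseBlock(16, 12, 3)(x).shape)      # torch.Size([1, 52, 32, 32])
```

The practical difference is visible in the shapes: the residual block preserves the channel count, while the dense block grows it with every layer, which is one reason globally dense connectivity becomes costly as depth increases.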