ResNet and its variants play an important role in many areas of image recognition. This paper presents another variant of ResNets, a kind of cross-residual learning network called C-ResNets, which requires less computation and fewer parameters than ResNets. C-ResNets increases the information interaction between modules by densifying the jumpers (shortcut connections) and enriches the role these jumpers play. In addition, careful design of the jumpers and channel counts further reduces the resource consumption of C-ResNets and improves its classification performance. To test the effectiveness of C-ResNets, we use the same hyperparameter settings as the fine-tuned ResNets in our experiments. We evaluate C-ResNets on the MNIST, FashionMnist, CIFAR-10, CIFAR-100, CALTECH-101 and SVHN datasets. Compared with fine-tuned ResNets, C-ResNets not only maintains classification performance but also greatly reduces the amount of computation and the number of parameters, which in turn lowers GPU utilization and GPU memory consumption. Therefore, C-ResNets is a competitive and viable alternative to ResNets in various scenarios. Code is available at https://github.com/liangjunhello/C-ResNet
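To make the idea of "densified jumpers" concrete, the following is a minimal, hypothetical PyTorch sketch. It is not the authors' implementation (see the repository above for that); it only assumes that "cross-residual" means a block's shortcut can also skip across to a later block, so that later blocks receive earlier feature maps directly. The module names (`BasicBlock`, `CrossResidualStage`) and the specific wiring are illustrative assumptions.

```python
# Illustrative sketch only: the real C-ResNets topology is defined in the
# authors' repository. Here we merely assume that a "cross" jumper lets the
# stage input skip over two blocks instead of one, densifying the shortcuts.
import torch
import torch.nn as nn


class BasicBlock(nn.Module):
    """A standard 3x3-3x3 residual body (as in ResNet-18/34), without the add."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        return self.bn2(self.conv2(out))


class CrossResidualStage(nn.Module):
    """Two blocks with densified jumpers: the stage input is added not only to
    the first block's output (the ordinary shortcut) but also to the second
    block's output (an assumed cross jumper between modules)."""

    def __init__(self, channels: int):
        super().__init__()
        self.block1 = BasicBlock(channels)
        self.block2 = BasicBlock(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y1 = self.relu(self.block1(x) + x)        # ordinary residual shortcut
        y2 = self.relu(self.block2(y1) + y1 + x)  # extra cross jumper from x
        return y2


if __name__ == "__main__":
    stage = CrossResidualStage(channels=64)
    out = stage(torch.randn(1, 64, 32, 32))
    print(out.shape)  # torch.Size([1, 64, 32, 32])
```

In this sketch the cross jumper is a plain identity addition, so it adds no parameters and negligible computation; any parameter or FLOP savings in C-ResNets would instead come from the paper's channel-count and jumper design choices, which are not reproduced here.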