Capturing feature information effectively is of great importance in vision tasks. With the development of convolutional neural networks (CNNs), concepts such as residual connections and multi-scale processing have driven continual performance gains across diverse deep learning vision tasks. However, existing methods do not organically combine the advantages of these ideas. In this paper, we propose a novel CNN architecture called GoogLe2Net, which consists of residual feature-reutilization inceptions (ResFRI) or split residual feature-reutilization inceptions (Split-ResFRI). These modules create transverse passages between adjacent groups of convolutional layers, enabling features to flow to later processing branches, and incorporate residual connections to better process information. GoogLe2Net is able to reutilize information captured by preceding groups of convolutional layers and to express multi-scale features at a fine-grained level, which improves performance on image classification. Moreover, the proposed inception module can be embedded into inception-like networks directly without any migration cost. In experiments on popular vision datasets, namely CIFAR10 (97.94%), CIFAR100 (85.91%) and Tiny ImageNet (70.54%), we obtain better results on the image classification task than other modern models.