Most current image super-resolution (SR) methods based on convolutional neural networks (CNNs) adopt residual learning in their network design, which facilitates effective back-propagation and thereby improves SR performance as model scale increases. However, residual networks suffer from representational redundancy: the identity paths they introduce impede full exploitation of model capacity. Moreover, blindly enlarging the network scale causes further difficulties in model training, even with residual learning. In this paper, a novel fully channel-concatenated network (FC$^2$N) is presented to further exploit the representational capacity of deep models, in which all interlayer skips are implemented by a simple and straightforward operation, i.e., weighted channel concatenation (WCC), followed by a 1$\times$1 conv layer. With the WCC, the model achieves a joint attention mechanism over linear and nonlinear features in the network, and outperforms other state-of-the-art SR models with fewer model parameters. To the best of our knowledge, FC$^2$N is the first CNN model that reaches a network depth of over 400 layers without residual learning. Moreover, it performs well in both large-scale and lightweight implementations, which illustrates the full exploitation of the representational capacity of the model.
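To make the WCC operation concrete, the following is a minimal PyTorch sketch of a weighted channel concatenation block as described above: each incoming feature map is scaled by a learnable scalar, the scaled maps are concatenated along the channel axis, and a 1$\times$1 conv fuses them back to the target width. The module name, channel widths, and initialization are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class WCC(nn.Module):
    """Weighted channel concatenation (sketch): scale each input branch by a
    learnable scalar, concatenate along channels, then fuse with a 1x1 conv.
    Names and defaults are assumptions for illustration only."""
    def __init__(self, num_inputs, channels):
        super().__init__()
        # one learnable scalar weight per concatenated branch
        self.weights = nn.Parameter(torch.ones(num_inputs))
        # 1x1 conv that fuses the concatenated features back to `channels`
        self.fuse = nn.Conv2d(num_inputs * channels, channels, kernel_size=1)

    def forward(self, features):
        # features: list of tensors, each of shape (N, channels, H, W)
        scaled = [w * f for w, f in zip(self.weights, features)]
        return self.fuse(torch.cat(scaled, dim=1))

# usage sketch: fuse two feature maps of 64 channels each
x1 = torch.randn(1, 64, 32, 32)
x2 = torch.randn(1, 64, 32, 32)
out = WCC(num_inputs=2, channels=64)([x1, x2])  # -> (1, 64, 32, 32)
```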