Convolutional neural networks (CNNs) have achieved great success in image super-resolution (SR). However, most deep CNN-based SR models require massive computation to obtain high performance. Downsampling features for multi-resolution fusion is an efficient and effective way to improve the performance of visual recognition, yet it is counter-intuitive for the SR task, which needs to project a low-resolution input to a high-resolution output. In this paper, we propose a novel Hybrid Pixel-Unshuffled Network (HPUN) by introducing an efficient and effective downsampling module into the SR task. The network contains pixel-unshuffled downsampling and Self-Residual Depthwise Separable Convolutions. Specifically, we utilize the pixel-unshuffle operation to downsample the input features and use grouped convolution to reduce the channel count. Besides, we enhance the performance of the depthwise convolution by adding its input feature to its output. Experiments on benchmark datasets show that our HPUN matches or surpasses state-of-the-art reconstruction performance with fewer parameters and lower computation cost.
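To make the two components concrete, below is a minimal PyTorch sketch of the ideas described above: pixel-unshuffle downsampling followed by a grouped convolution that reduces channels, and a depthwise separable convolution with a self-residual connection (input added to the depthwise output). The module names, kernel sizes, and channel settings are illustrative assumptions based solely on this abstract, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PixelUnshuffleDownsample(nn.Module):
    """Downsample features via pixel-unshuffle, then reduce channels
    with a grouped 1x1 convolution (hypothetical sketch of the idea
    described in the abstract)."""

    def __init__(self, channels: int, scale: int = 2):
        super().__init__()
        self.scale = scale
        # pixel-unshuffle multiplies channels by scale**2; the grouped
        # convolution maps them back to the original channel count
        self.reduce = nn.Conv2d(
            channels * scale ** 2, channels,
            kernel_size=1, groups=channels, bias=False,
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.pixel_unshuffle(x, self.scale)  # (B, C*s*s, H/s, W/s)
        return self.reduce(x)


class SelfResidualDSConv(nn.Module):
    """Depthwise separable convolution where the input feature is added
    to the depthwise output (self-residual) before the pointwise conv."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(
            channels, channels, kernel_size,
            padding=kernel_size // 2, groups=channels, bias=False,
        )
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # self-residual: add the input feature to the depthwise output
        return self.pointwise(self.depthwise(x) + x)


if __name__ == "__main__":
    feat = torch.randn(1, 48, 64, 64)
    down = PixelUnshuffleDownsample(48)(feat)  # -> (1, 48, 32, 32)
    out = SelfResidualDSConv(48)(down)         # -> (1, 48, 32, 32)
    print(down.shape, out.shape)
```

The grouped 1x1 convolution keeps the channel reduction cheap (each group mixes only the sub-pixels belonging to one original channel), which is consistent with the abstract's emphasis on lowering parameters and computation.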