This paper explores training efficient VGG-style super-resolution (SR) networks with the structural re-parameterization technique. The general pipeline of re-parameterization is to train networks with a multi-branch topology first, and then merge the branches into standard 3x3 convolutions for efficient inference. In this work, we revisit those primary designs and investigate the essential components for re-parameterizing SR networks. First of all, we find that batch normalization (BN) is important for introducing training non-linearity and improving the final performance. However, BN is typically omitted in SR, as it usually degrades performance and introduces unpleasant artifacts. We carefully analyze the cause of the BN issue and then propose a straightforward yet effective solution. In particular, we first train SR networks with mini-batch statistics as usual, and then switch to using population statistics in the later training period. Having successfully re-introduced BN into SR, we further design a new re-parameterizable block tailored for SR, namely RepSR. It consists of a clean residual path and two expand-and-squeeze convolution paths with the modified BN. Extensive experiments demonstrate that our simple RepSR achieves superior performance to previous SR re-parameterization methods across different model sizes. In addition, RepSR achieves a better trade-off between performance and actual running time (throughput) than previous SR methods. Codes will be available at https://github.com/TencentARC/RepSR.
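The BN trick described above can be illustrated with a minimal sketch: normalize with mini-batch statistics for most of training while accumulating running (population) estimates, then flip a flag for the final training period so the frozen population statistics are used instead. The class and attribute names here (`SwitchableBN`, `use_population_stats`) are illustrative, not from the paper's implementation.

```python
import numpy as np

class SwitchableBN:
    """Minimal 1-D batch norm sketching the described training trick:
    mini-batch statistics early in training, population statistics late.
    Names are hypothetical, not the paper's actual code."""

    def __init__(self, num_features, momentum=0.1, eps=1e-5):
        self.gamma = np.ones(num_features)   # learnable scale
        self.beta = np.zeros(num_features)   # learnable shift
        self.running_mean = np.zeros(num_features)
        self.running_var = np.ones(num_features)
        self.momentum = momentum
        self.eps = eps
        self.use_population_stats = False    # set True for the late training period

    def __call__(self, x):
        # x: (batch, features)
        if self.use_population_stats:
            # Late training: normalize with frozen population estimates.
            mean, var = self.running_mean, self.running_var
        else:
            # Early training: normalize with the current mini-batch,
            # while updating the running (population) estimates.
            mean = x.mean(axis=0)
            var = x.var(axis=0)
            self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean
            self.running_var = (1 - self.momentum) * self.running_var + self.momentum * var
        return self.gamma * (x - mean) / np.sqrt(var + self.eps) + self.beta
```

Usage follows the two-phase schedule: call the layer normally for most iterations, then set `use_population_stats = True` near the end of training, after which the normalization becomes deterministic with respect to the accumulated statistics (matching the behavior needed at inference and for branch merging).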