Binary neural networks (BNNs) provide a promising solution for deploying parameter-intensive deep single image super-resolution (SISR) models on real devices with limited storage and computational resources. To achieve performance comparable to their full-precision counterparts, most existing BNNs for SISR focus mainly on compensating for the information loss incurred by binarizing weights and activations in the network, through better approximations of the binarized convolution. In this study, we revisit the difference between BNNs and their full-precision counterparts and argue that the key to good generalization performance of BNNs lies in preserving a complete full-precision information flow as well as an accurate gradient flow passing through each binarized convolution layer. Inspired by this, we propose to introduce a full-precision skip connection, or a variant thereof, over each binarized convolution layer across the entire network, which increases the forward expressive capability and the accuracy of the back-propagated gradients, thus enhancing generalization performance. More importantly, such a scheme is applicable to any existing BNN backbone for SISR without introducing any additional computation cost. To verify its efficacy, we evaluate it with four different SISR backbones on four benchmark datasets and report clearly superior performance over existing BNNs and even some 4-bit competitors.
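To make the core idea concrete, below is a minimal PyTorch sketch of a binarized convolution wrapped by a full-precision skip connection. It is an illustrative assumption of how such a block could be structured, not the paper's implementation: the class names (BinarySTE, BinaryConvWithSkip), the 3x3 kernel, and the straight-through gradient estimator are all placeholders chosen for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinarySTE(torch.autograd.Function):
    """Sign binarization with a straight-through estimator for the gradient."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Pass the gradient through only where the input lies in [-1, 1].
        return grad_output * (x.abs() <= 1).float()


class BinaryConvWithSkip(nn.Module):
    """Hypothetical block: a binarized 3x3 convolution plus a full-precision skip.

    Weights and activations are binarized inside the conv branch; the identity
    path carries the full-precision input unchanged, so the block preserves a
    full-precision information flow in the forward pass and an accurate
    gradient path around the binarized convolution in the backward pass.
    """

    def __init__(self, channels):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(channels, channels, 3, 3) * 0.01)

    def forward(self, x):
        bin_act = BinarySTE.apply(x)
        bin_w = BinarySTE.apply(self.weight)
        out = F.conv2d(bin_act, bin_w, padding=1)
        return out + x  # full-precision skip over the binarized conv


if __name__ == "__main__":
    block = BinaryConvWithSkip(channels=16)
    feat = torch.randn(1, 16, 32, 32, requires_grad=True)
    out = block(feat)
    out.mean().backward()
    print(out.shape, feat.grad.abs().mean().item())
```

Because the skip path is an identity over features that already exist, it adds no multiply-accumulate operations, which is consistent with the abstract's claim of no additional computation cost.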