Despite breakthrough advances in image super-resolution (SR) with convolutional neural networks (CNNs), SR has yet to enjoy ubiquitous applications due to the high computational complexity of SR networks. Quantization is one promising approach to this problem. However, existing methods fail to quantize SR models below 8 bits, suffering severe accuracy loss because a fixed bit-width is applied everywhere. In this work, to achieve high average bit-reduction with less accuracy loss, we propose a novel Content-Aware Dynamic Quantization (CADyQ) method for SR networks that adaptively allocates optimal bits to local regions and layers based on the local contents of an input image. To this end, a trainable bit selector module is introduced to determine the proper bit-width and quantization level for each layer and each given local image patch. This module is governed by the quantization sensitivity, which is estimated from both the average magnitude of the image gradient of the patch and the standard deviation of the input feature of the layer. The proposed quantization pipeline has been tested on various SR networks and evaluated extensively on several standard benchmarks. The significant reduction in computational complexity and the improved restoration accuracy clearly demonstrate the effectiveness of the proposed CADyQ framework for SR. Code is available at https://github.com/Cheeun/CADyQ.
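To make the abstract's description of the bit selector concrete, below is a minimal sketch in PyTorch-style Python. It is an illustration under assumptions, not the authors' implementation: the class name `BitSelector`, the candidate bit-widths {4, 6, 8}, and the tiny MLP are hypothetical choices; only the two sensitivity cues (average patch gradient magnitude and feature standard deviation) come from the abstract. Refer to the official repository for the actual code.

```python
# Illustrative sketch of a content-aware bit selector (assumed PyTorch-style;
# names, candidate bit-widths, and network sizes are hypothetical).
import torch
import torch.nn as nn


def patch_gradient_magnitude(patch: torch.Tensor) -> torch.Tensor:
    """Average magnitude of the image gradient of an input patch (B, C, H, W)."""
    dx = patch[..., :, 1:] - patch[..., :, :-1]
    dy = patch[..., 1:, :] - patch[..., :-1, :]
    return dx.abs().mean(dim=(1, 2, 3)) + dy.abs().mean(dim=(1, 2, 3))


class BitSelector(nn.Module):
    """Chooses a bit-width per layer and per patch from the two sensitivity cues
    named in the abstract: patch gradient magnitude and feature standard deviation."""

    def __init__(self, candidate_bits=(4, 6, 8)):
        super().__init__()
        self.candidate_bits = candidate_bits
        # Tiny trainable selector over the 2-D sensitivity descriptor.
        self.mlp = nn.Sequential(
            nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, len(candidate_bits))
        )

    def forward(self, patch: torch.Tensor, feature: torch.Tensor) -> torch.Tensor:
        grad_mag = patch_gradient_magnitude(patch)       # (B,) content cue
        feat_std = feature.std(dim=(1, 2, 3))            # (B,) layer cue
        desc = torch.stack([grad_mag, feat_std], dim=1)  # (B, 2)
        logits = self.mlp(desc)                          # (B, num_candidates)
        # Hard selection at inference; a differentiable relaxation
        # (e.g., Gumbel-softmax) would be needed during training.
        idx = logits.argmax(dim=1)
        bits = torch.tensor(self.candidate_bits, device=patch.device)[idx]
        return bits                                      # (B,) chosen bit-widths
```

In this sketch, low-gradient (smooth) patches and low-variance features can be routed to lower bit-widths, while detailed patches and high-variance layers keep higher precision, which is the intuition behind allocating bits per region and per layer.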