With the rapid development of real-world applications, ever higher requirements are being placed on the accuracy and efficiency of image super-resolution (SR). Although existing methods have achieved remarkable success, most of them demand substantial computational resources and memory, and thus cannot be readily deployed on mobile devices. In this paper, we aim to design an efficient architecture for 8-bit quantization and deploy it on mobile devices. First, we conduct an experiment on meta-node latency by decomposing lightweight SR architectures, which determines the set of portable operations we can utilize. Then, we dig deeper into what kind of architecture is beneficial to 8-bit quantization and propose the anchor-based plain net (ABPN). Finally, we adopt a quantization-aware training strategy to further boost performance. Our model outperforms 8-bit quantized FSRCNN by nearly 2 dB in terms of PSNR while satisfying realistic deployment constraints. Code is available at https://github.com/NJU-Jet/SR_Mobile_Quantization.
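The "anchor" in the anchor-based formulation can be understood as repeating each low-resolution pixel across channels so that, after a depth-to-space (pixel shuffle) rearrangement, it reproduces nearest-neighbor upsampling; the network then only has to learn a small residual on top of this anchor, which keeps activation ranges narrow and quantization-friendly. The following NumPy sketch illustrates the anchor + depth-to-space path only (function names and the standalone framing are our own, not the authors' code):

```python
import numpy as np

def anchor(lr, scale=3):
    """Repeat the LR image scale^2 times along channels (the 'anchor').

    After depth_to_space, this tiling is exactly nearest-neighbor
    upsampling, so the conv branch only predicts a small residual.
    lr: (H, W, C) -> (H, W, C * scale^2)
    """
    return np.tile(lr, (1, 1, scale * scale))

def depth_to_space(x, scale=3):
    """Pixel shuffle: rearrange (H, W, C*s*s) -> (H*s, W*s, C)."""
    h, w, c = x.shape
    c_out = c // (scale * scale)
    x = x.reshape(h, w, scale, scale, c_out)
    x = x.transpose(0, 2, 1, 3, 4)        # (H, s, W, s, C)
    return x.reshape(h * scale, w * scale, c_out)

if __name__ == "__main__":
    lr = np.arange(4, dtype=np.float32).reshape(2, 2, 1)
    # In the full model a plain conv stack would add a residual to the
    # anchored tensor before the shuffle; here the residual is omitted.
    hr = depth_to_space(anchor(lr, scale=2), scale=2)
    print(hr.shape)  # (4, 4, 1): nearest-neighbor upsampled input
```

Verifying that `depth_to_space(anchor(x))` equals nearest-neighbor upsampling is a quick sanity check when reimplementing the architecture.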