Flow-based methods have demonstrated promising results in addressing the ill-posed nature of super-resolution (SR) by learning the distribution of high-resolution (HR) images with normalizing flows. However, these methods can only perform SR at a predefined fixed scale, limiting their potential in real-world applications. Meanwhile, arbitrary-scale SR has gained more attention and achieved great progress. Nonetheless, previous arbitrary-scale SR methods ignore the ill-posed problem and train the model with a per-pixel L1 loss, leading to blurry SR outputs. In this work, we propose "Local Implicit Normalizing Flow" (LINF) as a unified solution to the above problems. LINF models the distribution of texture details under different scaling factors with a normalizing flow. Thus, LINF can generate photo-realistic HR images with rich texture details at arbitrary scale factors. We evaluate LINF with extensive experiments and show that it achieves state-of-the-art perceptual quality compared with prior arbitrary-scale SR methods.
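The sketch below is a minimal illustration (not the authors' implementation) of the core idea described above: a conditional normalizing flow over local texture, conditioned on a local implicit feature built from an LR-encoder feature, a relative query coordinate, and the cell (scale) size, so the same model serves arbitrary scale factors. All module and variable names (`LocalImplicitCondition`, `ConditionalAffineFlow`, the feature dimensions, and the single affine-injector step) are illustrative assumptions.

```python
# Minimal sketch of a scale-conditioned normalizing flow for local texture.
# Names and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn

class LocalImplicitCondition(nn.Module):
    """Maps (LR feature, relative coord, cell/scale) to a conditioning vector."""
    def __init__(self, feat_dim=64, hidden=256, out_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2 + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, lr_feat, rel_coord, cell):
        return self.mlp(torch.cat([lr_feat, rel_coord, cell], dim=-1))

class ConditionalAffineFlow(nn.Module):
    """One conditional affine step: z = (x - shift(c)) * exp(-log_scale(c))."""
    def __init__(self, cond_dim=64, data_dim=3):
        super().__init__()
        self.net = nn.Linear(cond_dim, 2 * data_dim)

    def forward(self, x, cond):
        shift, log_scale = self.net(cond).chunk(2, dim=-1)
        z = (x - shift) * torch.exp(-log_scale)
        log_det = -log_scale.sum(dim=-1)  # change-of-variables term
        return z, log_det

    def inverse(self, z, cond):
        shift, log_scale = self.net(cond).chunk(2, dim=-1)
        return z * torch.exp(log_scale) + shift

# Training: maximize the exact log-likelihood of the HR texture at each query point.
cond_net, flow = LocalImplicitCondition(), ConditionalAffineFlow()
lr_feat   = torch.randn(8, 64)  # feature from an LR encoder (assumed)
rel_coord = torch.rand(8, 2)    # query coordinate relative to the LR grid
cell      = torch.rand(8, 2)    # cell size, i.e. the (arbitrary) scale factor
texture   = torch.randn(8, 3)   # ground-truth local RGB texture

cond = cond_net(lr_feat, rel_coord, cell)
z, log_det = flow(texture, cond)
nll = 0.5 * (z ** 2).sum(dim=-1) - log_det  # Gaussian NLL up to a constant
loss = nll.mean()

# Inference: sample a latent and invert the flow to obtain photo-realistic
# texture at any requested scale factor.
sample = flow.inverse(0.8 * torch.randn(8, 3), cond)
```

In this reading, fixed-scale flow-based SR would bake the scale into the architecture, whereas conditioning on the cell size lets one model cover the continuum of scale factors; the likelihood objective, rather than a per-pixel L1 loss, is what allows sharp, diverse texture samples.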