Super-resolution is an ill-posed problem: a single low-resolution input admits many plausible high-resolution candidates. However, the popular $\ell_1$ loss, which fits the network output to the one given HR image, fails to account for this fundamental non-uniqueness of image restoration. In this work, we fix this missing piece in the $\ell_1$ loss by formulating super-resolution with neural networks as a probabilistic model. We show that the $\ell_1$ loss is equivalent to a degraded likelihood function that removes the randomness from the learning process. By introducing a data-adaptive random variable, we present a new objective function that minimizes the expectation of the reconstruction error over all plausible solutions. Experimental results show consistent improvements on mainstream architectures, with no extra parameters or computational cost at inference time.
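The contrast between the two objectives can be sketched as follows. This is a minimal illustration, not the paper's implementation: the plausible HR solutions are modeled here as the given target perturbed by Gaussian noise with a hypothetical per-pixel scale `sigma`, whereas the paper's data-adaptive random variable is defined by its probabilistic formulation.

```python
import numpy as np

def l1_loss(pred, target):
    # Standard l1 reconstruction loss: fits the single given HR target.
    return float(np.abs(pred - target).mean())

def expected_l1_loss(pred, target, sigma, n_samples=64, seed=0):
    # Sketch of the proposed idea: minimize the EXPECTED l1 error over
    # plausible HR solutions instead of the error to one fixed target.
    # Here the solution set is modeled as target + sigma * noise, a
    # stand-in assumption for the paper's data-adaptive random variable.
    rng = np.random.default_rng(seed)
    losses = []
    for _ in range(n_samples):
        noise = sigma * rng.standard_normal(target.shape)
        losses.append(np.abs(pred - (target + noise)).mean())
    # Monte Carlo estimate of the expectation over the solution set.
    return float(np.mean(losses))
```

With `sigma = 0` the solution set collapses to the single target and the objective reduces to the plain $\ell_1$ loss, matching the paper's observation that $\ell_1$ corresponds to a likelihood with the randomness removed.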