Deep learning provides a new avenue for image restoration, which demands a delicate balance between fine-grained spatial details and high-level contextualized information while recovering the latent clear image. In practice, however, existing methods empirically construct encapsulated end-to-end mapping networks without examining their underlying rationale, and neglect the intrinsic prior knowledge of the restoration task. To address these problems, inspired by Taylor's approximations, we unfold Taylor's formula to construct a novel framework for image restoration. We find that the main part and the derivative part of Taylor's approximations play the same roles as the two competing goals of image restoration, namely high-level contextualized information and spatial details, respectively. Specifically, our framework consists of two steps, which are correspondingly responsible for the mapping function and the derivative function. The former first learns the high-level contextualized information, and the latter combines it with the degraded input to progressively recover local high-order spatial details. Our proposed framework is orthogonal to existing methods and thus can be easily integrated with them for further improvement. Extensive experiments demonstrate the effectiveness and scalability of the proposed framework.
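As a rough sketch of the analogy (the notation here is illustrative and not taken from the paper): letting $f$ denote the ideal restoration mapping, $x$ the degraded input, and $x_0$ a reference point, a $K$-th order Taylor expansion separates a zeroth-order mapping term from the higher-order derivative terms,
\[
f(x) \;\approx\; \underbrace{f(x_0)}_{\substack{\text{mapping part:}\\ \text{high-level context}}}
\;+\; \underbrace{\sum_{k=1}^{K} \frac{f^{(k)}(x_0)}{k!}\,(x - x_0)^{k}}_{\substack{\text{derivative part:}\\ \text{high-order spatial details}}},
\]
where the first term corresponds to the step that learns high-level contextualized information and the summation corresponds to the step that progressively recovers local high-order spatial details from the degraded input.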