In Deep Image Prior (DIP), a Convolutional Neural Network (CNN) is fitted to map a latent input to a degraded (e.g., noisy) image, yet in the process it learns to reconstruct the clean image. This phenomenon is attributed to the CNN's internal image prior. We revisit the DIP framework, examining it from the perspective of a neural implicit representation. Motivated by this perspective, we replace the random or learned latent input with Fourier features (positional encoding). We show that, thanks to the properties of Fourier features, the convolutional layers can be replaced with simple pixel-level MLPs. We name this scheme ``Positional Encoding Image Prior'' (PIP) and show that it performs very similarly to DIP on various image-reconstruction tasks while requiring far fewer parameters. Additionally, we demonstrate that PIP extends easily to videos, where 3D-DIP struggles and suffers from instability. Code and additional examples for all tasks, including videos, are available on the project page: https://nimrodshabtay.github.io/PIP/
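The core ingredient above is mapping each pixel coordinate to a Fourier-feature (positional-encoding) vector that a pixel-level MLP can then decode. The sketch below illustrates one common form of this mapping; the Gaussian frequency sampling, the `scale` parameter, and the feature dimension are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def fourier_features(coords, num_freqs=64, scale=10.0, seed=0):
    """Map normalized pixel coordinates to Fourier features.

    coords: (N, d) array with entries in [0, 1].
    Returns an (N, 2 * num_freqs) array of sin/cos features.
    Gaussian-sampled frequencies are one common choice; the paper's
    exact encoding may differ.
    """
    rng = np.random.default_rng(seed)
    B = rng.normal(scale=scale, size=(coords.shape[1], num_freqs))
    proj = 2.0 * np.pi * coords @ B  # project coords onto random frequencies
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

# Encode a 4x4 image grid: each pixel becomes a feature vector that a
# per-pixel MLP could map to RGB values, with no convolutions involved.
h = w = 4
ys, xs = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w), indexing="ij")
coords = np.stack([ys.ravel(), xs.ravel()], axis=-1)  # (16, 2)
features = fourier_features(coords)                    # (16, 128)
```

Because the encoding is computed independently per pixel, the downstream network can be a plain MLP applied pixel-wise, which is what allows dropping the convolutional layers.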