Deep image prior (DIP) was recently introduced as an effective unsupervised approach for image restoration tasks. DIP represents the image to be recovered as the output of a deep convolutional neural network, and learns the network's parameters such that the output matches the corrupted observation. Despite its impressive reconstructive properties, the approach is slow compared to supervised learning-based or traditional reconstruction techniques. To address this computational challenge, we equip DIP with a two-stage learning paradigm: (i) perform a supervised pretraining of the network on a simulated dataset; (ii) fine-tune the network's parameters to adapt to the target reconstruction task. We provide a thorough empirical analysis to shed light on the impact of pretraining in the context of image reconstruction. We show that pretraining considerably speeds up and stabilizes the subsequent reconstruction from real-measured 2D and 3D micro computed tomography data of biological specimens. The code and additional experimental materials are available at https://educateddip.github.io/docs.educated_deep_image_prior/.
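The two-stage paradigm described above can be summarized in a minimal PyTorch-style sketch. Everything below is illustrative rather than the paper's actual pipeline: `net` is a toy stand-in for the paper's CNN architecture, and `forward_op`, `sim_loader`, `x_in`, and `y_meas` are hypothetical placeholders for the measurement operator, the simulated training data, the network input, and the measured sinogram.

```python
import torch
import torch.nn as nn

# Toy image-to-image CNN; a stand-in for the actual (U-Net-style) architecture.
net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

def pretrain(net, sim_loader, epochs=10, lr=1e-3):
    """Stage (i): supervised pretraining on simulated (input, ground-truth) pairs."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        for x_in, x_gt in sim_loader:  # e.g. (coarse reconstruction, phantom) pairs
            opt.zero_grad()
            loss = nn.functional.mse_loss(net(x_in), x_gt)
            loss.backward()
            opt.step()

def dip_finetune(net, x_in, y_meas, forward_op, iters=1000, lr=1e-4):
    """Stage (ii): DIP-style fine-tuning on the target task, minimizing the
    data discrepancy || A net(x_in) - y_meas ||^2 (no ground truth needed)."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = nn.functional.mse_loss(forward_op(net(x_in)), y_meas)
        loss.backward()
        opt.step()
    return net(x_in).detach()  # final reconstruction

# Usage sketch: pretrain once on simulated data, then adapt to each measurement.
# pretrain(net, sim_loader)
# reconstruction = dip_finetune(net, x_in, y_meas, forward_op)
```

The key design point is that both stages optimize the same network parameters: the supervised stage provides an educated initialization, so the unsupervised DIP stage starts near a good solution instead of from scratch, which is what yields the reported speed-up and stabilization.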