We analyze and address the overfitting problem of the deep image prior (DIP). DIP can solve inverse problems such as super-resolution, inpainting, and denoising. Its main advantage over other deep learning approaches is that it does not require access to a large dataset. However, because the neural network has a large number of parameters and the data are noisy, DIP overfits to the noise in the image as the number of iterations grows. In this thesis, we use hybrid deep image priors to avoid overfitting. The hybrid priors combine DIP with an explicit prior, such as total variation, or with an implicit prior, such as a denoising algorithm. We use the alternating direction method of multipliers (ADMM) to incorporate the new prior, and we try different forms of ADMM to avoid the extra computation caused by the inner loop of the ADMM steps. We also study the relation between the dynamics of gradient descent and the overfitting phenomenon. The numerical results show that the hybrid priors play an important role in preventing overfitting. In addition, we try to fit the image along certain directions and find that this method reduces overfitting when the noise level is large; when the noise level is small, it does not considerably reduce overfitting.
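To illustrate the ADMM splitting that incorporates an explicit prior, the sketch below applies ADMM to a toy 1-D total-variation denoising problem in plain numpy. This is only a minimal stand-in, not the thesis's actual hybrid-DIP algorithm (which replaces the quadratic data-fit subproblem with fitting a neural network); the signal, the penalty weight `lam`, and the ADMM parameter `rho` are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Piecewise-constant ground truth and a noisy observation of it
n = 100
x_true = np.concatenate([np.zeros(50), np.ones(50)])
y = x_true + 0.1 * rng.standard_normal(n)

# First-difference operator D, so the TV term is ||D x||_1
D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]

lam, rho = 0.5, 1.0             # TV weight and ADMM penalty (illustrative values)
A = np.eye(n) + rho * D.T @ D   # constant system matrix for the x-update

# ADMM on  min_x 0.5||x - y||^2 + lam * ||z||_1  s.t.  z = D x
x = y.copy()
z = D @ x
u = np.zeros(n - 1)             # scaled dual variable
for _ in range(200):
    x = np.linalg.solve(A, y + rho * D.T @ (z - u))           # data-fit step
    v = D @ x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)   # prox of the TV prior
    u = u + D @ x - z                                         # dual update

mse_noisy = np.mean((y - x_true) ** 2)
mse_admm = np.mean((x - x_true) ** 2)
```

The prior step here is a soft-threshold; swapping it for an off-the-shelf denoiser gives the plug-and-play flavor of an implicit prior mentioned above.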