In this paper, we propose a unified framework of denoising score-based models in the context of graduated non-convex energy minimization. We show that for sufficiently large noise variance, the associated negative log density -- the energy -- becomes convex. Consequently, denoising score-based models essentially follow a graduated non-convexity heuristic. We apply this framework to learning generalized Fields of Experts image priors that approximate the joint density of noisy images and their associated variances. These priors can be easily incorporated into existing optimization algorithms for solving inverse problems and naturally implement a fast and robust graduated non-convexity mechanism.
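To make the described mechanism concrete, here is a minimal, hypothetical sketch (not the paper's implementation) of how a graduated non-convexity scheme driven by a noise-conditional prior might look: an inverse problem is solved by gradient descent on a data-fidelity term plus a prior whose noise variance is annealed from large (where the energy is nearly convex) to small. The forward operator `A`, the weight `lam`, and the stand-in `prior_score` (a smoothed sparsity potential in place of a learned Fields of Experts score) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear inverse problem: sparse signal, random forward operator.
A = rng.standard_normal((32, 64)) / np.sqrt(64)
x_true = np.zeros(64)
x_true[rng.choice(64, 8, replace=False)] = rng.standard_normal(8)
y = A @ x_true + 0.05 * rng.standard_normal(32)

def prior_score(x, sigma):
    """Stand-in for a learned noise-conditional score, i.e. the gradient of the
    log density at noise level `sigma`. Here: a Student-t-like potential whose
    negative log density becomes nearly quadratic (convex) for large sigma."""
    return -x / (x**2 + sigma**2)

def data_grad(x):
    """Gradient of the quadratic data-fidelity term 0.5 * ||A x - y||^2."""
    return A.T @ (A @ x - y)

lam = 0.1          # prior weight (illustrative choice)
x = np.zeros(64)

# Graduated non-convexity: anneal sigma from large to small, minimizing the
# joint energy (data term minus log prior) at each noise level.
for sigma in np.geomspace(10.0, 0.05, num=30):
    step = 1.0 / (3.0 + lam / sigma**2)   # rough Lipschitz-based step size
    for _ in range(25):
        grad_energy = data_grad(x) - lam * prior_score(x, sigma)
        x -= step * grad_energy

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The annealing loop is the point of the sketch: at large sigma the smoothed potential is effectively convex, so early iterations find a coarse solution that later, non-convex stages only refine.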