The traditional approach of hand-crafting priors (such as sparsity) for solving inverse problems is slowly being replaced by the use of richer learned priors (such as those modeled by deep generative networks). In this work, we study the algorithmic aspects of such a learning-based approach from a theoretical perspective. For certain generative network architectures, we establish a simple non-convex algorithmic approach that (a) theoretically enjoys linear convergence guarantees for certain linear and nonlinear inverse problems, and (b) empirically improves upon conventional techniques such as back-propagation. We support our claims with experimental results for solving various inverse problems. We also propose an extension of our approach that can handle model mismatch (i.e., situations where the generative network prior is not exactly applicable). Together, our contributions serve as building blocks towards a principled use of generative models in inverse problems with more complete algorithmic understanding.
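The non-convex approach described above can be illustrated by a projected-gradient-style loop: alternate a gradient step on the measurement loss with a projection onto the range of the generative prior. The sketch below is a minimal toy instance, not the paper's method: it assumes a hypothetical *linear* "generator" G(z) = Wz so that the projection step is exact (the paper's setting concerns genuine generative network architectures), and it uses a noiseless linear inverse problem y = Ax.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance: the "generative prior" is a linear map G(z) = W z,
# so its range is a k-dimensional subspace and the projection step is exact.
# (This only illustrates the projected-descent loop, not the paper's networks.)
k, n, m = 5, 50, 25                    # latent dim, signal dim, measurements
W = rng.normal(size=(n, k))
A = rng.normal(size=(m, n)) / np.sqrt(m)

x_true = W @ rng.normal(size=k)        # ground-truth signal in range(G)
y = A @ x_true                         # noiseless linear measurements

P = W @ np.linalg.pinv(W)              # orthogonal projector onto range(G)
eta = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative step size

x = np.zeros(n)
for _ in range(500):
    x = x - eta * A.T @ (A @ x - y)    # gradient step on ||y - A x||^2 / 2
    x = P @ x                          # project back onto the prior's range

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative error: {rel_err:.2e}")
```

With a nonlinear generator, the exact projector `P` would be replaced by an approximate projection (e.g., minimizing ||G(z) - x|| over the latent z), which is where the architectural assumptions and convergence analysis come in.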