Recently, deep neural networks (DNNs) have shown advantages in accelerating optimization algorithms. One approach is to unfold a finite number of iterations of a conventional optimization algorithm and to learn the algorithm's parameters. However, such methods are feed-forward and are indeed neither iterative nor convergent. Here, we present a novel DNN-based convergent iterative algorithm that accelerates conventional optimization algorithms. We train a DNN to yield the parameters of the scaled gradient projection method. So far, these parameters have been chosen heuristically, but they have been shown to be crucial for good empirical performance. In simulation results, the proposed method significantly improves the empirical convergence rate over conventional optimization methods for various large-scale inverse problems in image processing.
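A minimal sketch of the scaled gradient projection iteration the abstract refers to, on a toy nonnegativity-constrained least-squares problem. The function predict_params is a hypothetical stand-in for the trained DNN that would output the per-iteration step size and diagonal scaling; here it returns the conventional heuristic choices (1/L step size, identity scaling) that the paper proposes to replace with learned ones.

    import numpy as np

    def scaled_gradient_projection(x0, grad_f, project, predict_params, n_iters=20):
        """Iterate x_{k+1} = P_C(x_k - alpha_k * D_k grad_f(x_k)), with the
        step size alpha_k and diagonal scaling d_k supplied per iteration
        (by a heuristic rule or, as proposed, by a trained DNN)."""
        x = x0
        for _ in range(n_iters):
            g = grad_f(x)
            alpha, d = predict_params(x, g)  # hypothetical DNN forward pass
            x = project(x - alpha * d * g)   # d applied elementwise (diagonal D_k)
        return x

    # Toy usage: min 0.5*||Ax - b||^2 subject to x >= 0.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 10))
    b = rng.standard_normal(30)

    grad_f = lambda x: A.T @ (A @ x - b)    # gradient of 0.5*||Ax - b||^2
    project = lambda x: np.maximum(x, 0.0)  # projection onto the nonnegative orthant
    # Heuristic parameters standing in for a trained network:
    # constant step 1/L with L = ||A||_2^2, and identity scaling.
    predict_params = lambda x, g: (1.0 / np.linalg.norm(A, 2) ** 2, np.ones_like(x))

    x_hat = scaled_gradient_projection(np.zeros(10), grad_f, project, predict_params)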