Recently, model-driven deep learning has unrolled certain iterative algorithms of regularization models into cascade networks by replacing the first-order information (i.e., the (sub)gradient or proximal operator) of the regularizer with a network module, which appears more explainable and predictable than common data-driven networks. However, in theory, there is not necessarily a functional regularizer whose first-order information matches the replaced network module, which means the network output may not be covered by the original regularization model. Moreover, to date, there is no theory guaranteeing the global convergence and robustness (regularity) of unrolled networks under realistic assumptions. To bridge this gap, this paper presents a safeguarded methodology for network unrolling. Specifically, focusing on accelerated MRI, we unroll a zeroth-order algorithm in which the network module represents the regularizer itself, so that the network output remains covered by the regularization model. Furthermore, inspired by the idea of deep equilibrium models, before backpropagating we run the unrolled iterative network until it converges to a fixed point, thereby ensuring convergence. In case the measurement data contain noise, we prove that the proposed network is robust against noisy interference. Finally, numerical experiments show that the proposed network consistently outperforms state-of-the-art MRI reconstruction methods, including traditional regularization methods and other deep learning methods.
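The deep-equilibrium-style forward pass described above — iterating the unrolled network to a fixed point before backpropagating — can be sketched as a plain fixed-point iteration. This is a minimal illustration under assumed names (`fixed_point_iterate`, the toy contraction `f`), not the paper's actual reconstruction network:

```python
# Hedged sketch of a deep-equilibrium-style forward pass: iterate
# x_{k+1} = f(x_k) until convergence to a fixed point x* = f(x*).
# In the paper, f would be the unrolled iteration containing the learned
# regularizer module; here f is a toy contraction for illustration only.

def fixed_point_iterate(f, x0, tol=1e-8, max_iter=1000):
    """Iterate x <- f(x) until |f(x) - x| falls below tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Toy contraction f(x) = 0.5*x + 1 has the unique fixed point x* = 2;
# Banach's fixed-point theorem guarantees convergence from any start.
x_star = fixed_point_iterate(lambda x: 0.5 * x + 1.0, x0=0.0)
```

In a deep equilibrium model, gradients at the fixed point can then be computed implicitly, without storing every forward iteration for backpropagation.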