There is a growing interest in deep model-based architectures (DMBAs) for solving imaging inverse problems by combining physical measurement models with learned image priors specified using convolutional neural nets (CNNs). Well-known frameworks for systematically designing DMBAs include plug-and-play priors (PnP), deep unfolding (DU), and deep equilibrium models (DEQ). While the empirical performance and theoretical properties of DMBAs have been widely investigated, existing work in the area has primarily focused on their performance when the desired image prior is known exactly. This work addresses this gap by providing new theoretical and numerical insights into DMBAs under mismatched CNN priors. Mismatched priors arise naturally when there is a distribution shift between training and testing data, for example, when test images come from a different distribution than the images used to train the CNN prior. They also arise when the CNN prior used for inference is an approximation of some desired statistical estimator (MAP or MMSE). Our theoretical analysis provides explicit error bounds on the solution due to mismatched CNN priors under a set of clearly specified assumptions. Our numerical results compare the empirical performance of DMBAs under realistic distribution shifts and approximate statistical estimators.
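To make the setting concrete, the sketch below shows a minimal plug-and-play (PnP) style iteration for a linear inverse problem y = Ax + noise, in which a denoiser plays the role of the learned CNN image prior. The measurement operator, step-size rule, and the soft-thresholding denoiser are illustrative assumptions for this toy example, not the specific architectures or priors analyzed in this work.

```python
import numpy as np

def pnp_ista(y, A, denoiser, num_iters=200):
    """Minimal PnP-ISTA sketch: alternate a gradient step on the data-fidelity
    term 0.5*||A x - y||^2 with a denoiser standing in for the CNN prior."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # step size from the spectral norm of A
    x = A.T @ y                                # initialize from the measurements
    for _ in range(num_iters):
        grad = A.T @ (A @ x - y)               # gradient of the data-fidelity term
        x = denoiser(x - step * grad)          # denoising step imposing the prior
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 100)) / np.sqrt(50)     # toy measurement operator
    x_true = np.zeros(100)
    x_true[rng.choice(100, size=5, replace=False)] = 1.0  # sparse ground truth
    y = A @ x_true + 0.01 * rng.standard_normal(50)       # noisy measurements
    # A soft-thresholding "denoiser" stands in for the CNN prior in this sketch.
    soft_threshold = lambda v: np.sign(v) * np.maximum(np.abs(v) - 0.01, 0.0)
    x_hat = pnp_ista(y, A, soft_threshold)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

A mismatched prior in this picture corresponds to replacing the denoiser with one trained on (or adapted to) a different image distribution than the test data, which is the scenario whose effect on the fixed point this work quantifies.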