Unfolding iterative algorithms (e.g., the iterative shrinkage thresholding algorithm, ISTA) as deep neural networks (DNNs) with learnable parameters is a promising approach to solving linear inverse problems. However, existing ISTA-based unfolded algorithms restrict the network architectures of the iterative updates to the partial weight coupling structure in order to guarantee convergence. In this paper, we propose hybrid ISTA, which unfolds ISTA with both pre-computed and learned parameters by incorporating free-form DNNs (i.e., DNNs with arbitrary feasible and reasonable network architectures) while ensuring theoretical convergence. We first develop HCISTA to improve the efficiency and flexibility of classical ISTA (with pre-computed parameters) without compromising the theoretical convergence rate. The DNN-based hybrid algorithm is then generalized to popular variants of learned ISTA, dubbed HLISTA, to enable free architectures for the learned parameters with a guarantee of linear convergence. To the best of our knowledge, this paper is the first to provide a convergence-provable framework that admits free-form DNNs in ISTA-based unfolded algorithms. The framework is general enough to endow arbitrary DNNs with convergence guarantees for solving linear inverse problems. Extensive experiments demonstrate that hybrid ISTA reduces reconstruction error with an improved convergence rate in sparse recovery and compressive sensing tasks.
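For context, the classical ISTA iteration that such networks unfold is $x^{k+1} = \mathrm{soft}_{\lambda/L}\big(x^k - \tfrac{1}{L} A^\top (A x^k - y)\big)$ for the problem $\min_x \tfrac{1}{2}\|Ax - y\|_2^2 + \lambda \|x\|_1$. Below is a minimal NumPy sketch of this baseline with pre-computed parameters; the function names and problem setup are illustrative and not taken from the paper. Unfolded variants (e.g., LISTA and the hybrid schemes above) replace the fixed matrices and thresholds with learned, layer-wise parameters.

```python
import numpy as np

def soft_threshold(x, theta):
    """Elementwise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista(A, y, lam, num_iters=100):
    """Classical ISTA for min_x 0.5*||A x - y||_2^2 + lam*||x||_1.

    The step size 1/L is pre-computed from the spectral norm of A;
    learned/unfolded variants would make these per-iteration parameters
    trainable (illustrative sketch, not the paper's implementation).
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        grad = A.T @ (A @ x - y)           # gradient of the data-fidelity term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```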