Deep learning has been applied to image compressive sensing (CS) for enhanced reconstruction performance. However, most existing deep learning methods train a separate model for each subsampling ratio, which imposes an additional hardware burden. In this paper, we develop a general framework named scalable deep compressive sensing (SDCS) that enables scalable sampling and reconstruction (SSR) for all existing end-to-end-trained models. In SDCS, images are measured and initialized linearly. Two sampling masks are introduced to flexibly control the subsampling ratios used in sampling and reconstruction, respectively. To make the reconstruction model adapt to any subsampling ratio, a training strategy dubbed scalable training is developed. In scalable training, the model is trained with the sampling matrix and the initialization matrix at various subsampling ratios by integrating different sampling matrix masks. Experimental results show that models with SDCS can achieve SSR without changing their structure while maintaining good performance, and that SDCS outperforms other SSR methods.
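As a rough illustration of the masked sampling and linear initialization described above, the sketch below shows how a binary mask over the rows of a sampling matrix can emulate different subsampling ratios. All names, dimensions, and the random matrices (Phi, Q, ratio_mask) are assumptions for illustration, not the paper's actual implementation, where Phi and Q would typically be learned jointly with the reconstruction network.

```python
# Minimal sketch of masked linear sampling and initialization (illustrative only).
import numpy as np

N = 1089       # vectorized block length (e.g. a 33x33 image block) -- assumed
M_MAX = 545    # rows of the full sampling matrix (maximum subsampling ratio) -- assumed

rng = np.random.default_rng(0)
Phi = rng.standard_normal((M_MAX, N)) / np.sqrt(N)      # sampling matrix (learned in practice)
Q = rng.standard_normal((N, M_MAX)) / np.sqrt(M_MAX)    # initialization matrix (learned in practice)

def ratio_mask(ratio: float, m_max: int = M_MAX) -> np.ndarray:
    """Binary mask keeping the first round(ratio * N) measurements."""
    m = int(round(ratio * N))
    mask = np.zeros(m_max)
    mask[:min(m, m_max)] = 1.0
    return mask

x = rng.standard_normal(N)      # a vectorized image block

s_mask = ratio_mask(0.25)       # mask controlling the ratio used during sampling
r_mask = ratio_mask(0.25)       # mask controlling the ratio seen by the reconstruction model

y = s_mask * (Phi @ x)          # masked linear measurement
x0 = Q @ (r_mask * y)           # masked linear initialization fed to the reconstruction network
```

In scalable training, the mask ratio would be varied across training samples so that a single model learns to reconstruct from measurements taken at any subsampling ratio.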