We solve the analysis sparse coding problem considering a combination of convex and non-convex sparsity-promoting penalties. The multi-penalty formulation results in an iterative algorithm involving proximal averaging. We then unfold the iterative algorithm into a trainable network that facilitates learning the sparsity prior. We also consider quantization of the network weights. Quantization makes neural networks efficient both in terms of memory and computation during inference, and also renders them compatible with low-precision hardware deployment. Our learning algorithm is based on a variant of the ADAM optimizer in which the quantizer is part of the forward pass and the gradients of the loss function are evaluated with respect to the quantized weights, while a book-keeping of the high-precision weights is maintained. We demonstrate applications to compressed image recovery and magnetic resonance image reconstruction. The proposed approach offers superior reconstruction accuracy and quality compared with state-of-the-art unfolding techniques, and the performance degradation is minimal even when the weights are subjected to extreme quantization.
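The iterative scheme rests on proximal averaging: when the regularizer is a convex combination of several penalties, its proximal operator is approximated by the same convex combination of the individual proximal operators. As a minimal illustration only, the sketch below applies this to a synthesis-form sparse coding problem (the paper treats the analysis form) with the convex l1 and non-convex l0 penalties; the function names, the combination weight alpha, and the threshold values are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def soft_threshold(v, lam):
    # Proximal operator of the convex l1 penalty.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def hard_threshold(v, lam):
    # Proximal operator of the non-convex l0 penalty.
    return v * (np.abs(v) > np.sqrt(2.0 * lam))

def ista_prox_avg(y, A, lam=0.1, alpha=0.5, n_iter=100):
    """ISTA-style iterations in which the proximal step is the
    proximal average of the l1 and l0 proximal operators."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the data-term gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = x - (A.T @ (A @ x - y)) / L      # gradient step on the data-fidelity term
        # convex combination (proximal average) of the two proximal maps
        x = alpha * soft_threshold(v, lam / L) \
            + (1.0 - alpha) * hard_threshold(v, lam / L)
    return x
```

In the unfolded network described in the abstract, quantities such as the thresholds and the combination weight would become learnable per-layer parameters, which is how the sparsity prior is learned from data.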
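The training scheme places the quantizer inside the forward pass, evaluates gradients at the quantized weights, and book-keeps a high-precision copy of the weights that an ADAM variant keeps updating. A minimal sketch of that idea using a straight-through estimator in PyTorch follows; STEQuantize, QuantLinear, and the uniform step size are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class STEQuantize(torch.autograd.Function):
    """Uniform quantizer whose backward pass is a straight-through estimator."""
    @staticmethod
    def forward(ctx, w, step):
        return step * torch.round(w / step)   # quantized weights used in the forward pass
    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None              # gradients pass straight through the quantizer

class QuantLinear(nn.Module):
    def __init__(self, d_in, d_out, step=2 ** -4):
        super().__init__()
        # High-precision weights are the parameters being book-kept and updated.
        self.weight = nn.Parameter(0.1 * torch.randn(d_out, d_in))
        self.step = step
    def forward(self, x):
        w_q = STEQuantize.apply(self.weight, self.step)  # quantize on the fly
        return x @ w_q.t()

# Adam updates the high-precision weights using gradients evaluated
# with respect to the quantized weights seen in the forward pass.
layer = QuantLinear(16, 8)
opt = torch.optim.Adam(layer.parameters(), lr=1e-3)
x, y = torch.randn(32, 16), torch.randn(32, 8)
loss = torch.nn.functional.mse_loss(layer(x), y)
opt.zero_grad()
loss.backward()
opt.step()
```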