Sparsity priors are commonly used in denoising and image reconstruction. For analysis-type priors, a dictionary defines a representation of signals that is likely to be sparse. In most situations, this dictionary is not known and must be recovered from pairs of ground-truth signals and measurements by minimizing the reconstruction error. This defines a hierarchical optimization problem, which can be cast as bi-level optimization. Yet, this problem cannot be solved directly, as reconstructions and their derivatives with respect to the dictionary have no closed-form expression. However, reconstructions can be computed iteratively with the Forward-Backward splitting (FB) algorithm. In this paper, we approximate reconstructions by the output of the aforementioned FB algorithm. Then, we leverage automatic differentiation to evaluate the gradient of this output with respect to the dictionary, which we learn by projected gradient descent. Experiments show that our algorithm successfully learns the 1D Total Variation (TV) dictionary from piecewise-constant signals. For the same case study, we propose to constrain the search to dictionaries with 0-centered columns, which removes undesired local minima and improves numerical stability.
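The pipeline described above can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: it runs FB on the dual of the analysis denoising problem min_x 0.5‖x − y‖² + λ‖Dx‖₁ (the dual projected-gradient iteration is one standard FB instance for this problem), differentiates through the unrolled loop with autodiff, and takes a projected gradient step on the dictionary. All names, step sizes, and iteration counts (`n_iter`, `lam`, `lr`) are illustrative assumptions.

```python
# Hedged sketch (assumed formulation, not the paper's code): unrolled
# Forward-Backward splitting on the dual of analysis denoising
#     min_x 0.5 * ||x - y||^2 + lam * ||D x||_1,
# then autodiff of the reconstruction with respect to the dictionary D.
import jax
import jax.numpy as jnp

def reconstruct(D, y, lam=1.0, n_iter=100):
    """Approximate the denoised signal by n_iter FB steps on the dual:
    min_u 0.5 * ||y - D.T @ u||^2  s.t.  ||u||_inf <= lam."""
    step = 1.0 / (jnp.linalg.norm(D, 2) ** 2 + 1e-12)  # 1 / Lipschitz constant
    u = jnp.zeros(D.shape[0])
    for _ in range(n_iter):
        grad = D @ (D.T @ u - y)                  # forward (gradient) step
        u = jnp.clip(u - step * grad, -lam, lam)  # backward step: l_inf projection
    return y - D.T @ u                            # primal solution from the dual

def recon_loss(D, y, x_true, lam=1.0):
    """Upper-level objective of the bi-level problem: reconstruction error."""
    x_hat = reconstruct(D, y, lam)
    return 0.5 * jnp.sum((x_hat - x_true) ** 2)

def dictionary_step(D, y, x_true, lam=1.0, lr=0.1):
    """One projected gradient step on D; the zero-mean projection of the
    atoms mirrors the 0-centered constraint (lr is an illustrative choice)."""
    g = jax.grad(recon_loss)(D, y, x_true, lam)   # autodiff through the FB loop
    D = D - lr * g
    return D - D.mean(axis=1, keepdims=True)      # project atoms to zero mean
```

With `D` set to the 1D finite-difference operator, `reconstruct` performs TV denoising; iterating `dictionary_step` over pairs of clean piecewise-constant signals and their noisy measurements gives the learning loop.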