Statistical inverse learning theory, a field lying at the intersection of inverse problems and statistical learning, has recently attracted increasing attention. In an effort to steer this interplay further towards the variational regularization framework, convergence rates have recently been proved, in the symmetric Bregman distance, for a class of convex, $p$-homogeneous regularizers with $p \in (1,2]$. Following this path, we take a further step towards the study of sparsity-promoting regularization and extend the aforementioned convergence rates to $\ell^p$-norm regularization, with $p \in (1,2)$, for a special class of non-tight Banach frames, called shearlets, and possibly constrained to a convex set. The $p = 1$ case is approached as the limit case $(1,2) \ni p \rightarrow 1$ by complementing numerical evidence with a (partial) theoretical analysis based on arguments from $\Gamma$-convergence theory. We numerically demonstrate our theoretical results in the context of X-ray tomography, under random sampling of the imaging angles, using both simulated and measured data.
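To fix ideas, the following is a minimal illustrative sketch of the kind of estimator and error metric referred to above; the symbols ($A$ for the forward operator, $\Psi$ for the shearlet analysis operator, $\mathcal{C}$ for the convex constraint set, $(x_i, y_i)_{i=1}^{n}$ for the randomly sampled noisy measurements, and $\lambda > 0$ for the regularization parameter) are notational assumptions of this sketch rather than the paper's exact setup:
\[
\hat f_\lambda \in \operatorname*{arg\,min}_{f \in \mathcal{C}} \; \frac{1}{n} \sum_{i=1}^{n} \bigl( (A f)(x_i) - y_i \bigr)^2 \;+\; \lambda \, \lVert \Psi f \rVert_{\ell^p}^{p}, \qquad p \in (1,2),
\]
with the convergence rates measured in the symmetric Bregman distance of the penalty $R(f) = \lVert \Psi f \rVert_{\ell^p}^{p}$,
\[
D_R^{\mathrm{sym}}(f, g) = \langle r_f - r_g,\, f - g \rangle, \qquad r_f \in \partial R(f), \; r_g \in \partial R(g).
\]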