The joint bidiagonalization process of a matrix pair $\{A,L\}$ can be used to develop iterative regularization algorithms for large-scale ill-posed problems in general-form Tikhonov regularization $\min_x\left\{\|Ax-b\|_{2}^2+\lambda^2\|Lx\|_{2}^2\right\}$ or the essentially equivalent problem $\min\|Lx\|_{2} \ \mbox{{\rm s.t.}} \ x\in\mathcal{S} = \{x \mid \|Ax-b\|_{2}\leq \eta\|e\|_{2}\}$, where $e$ denotes Gaussian white noise, $L$ is a regularization matrix, and $\eta$ is slightly greater than 1. A bottleneck of these algorithms is that a large-scale inner least squares problem with coefficient matrix $(A^{T}, L^{T})^{T}$ must be solved at each outer iteration, which can be costly, especially when high solution accuracy is required. In this paper, we investigate in detail the accuracy requirement on the solutions of the inner least squares problems and propose a practical stopping criterion for the inner iterations. The results show that for ill-posed problems whose noise levels are not too small, the solution accuracy of the inner least squares problems can be relaxed considerably without reducing the accuracy of the regularized solutions, so that the overall efficiency of the algorithms can be improved substantially. Numerical experiments illustrate our results and demonstrate the numerical performance of the algorithms.
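To fix ideas, the general-form Tikhonov problem above can be written as a single stacked least squares problem $\min_x \|\,(A^{T},\ \lambda L^{T})^{T}x - (b^{T},\ 0)^{T}\,\|_2$. The sketch below solves this stacked formulation directly with a dense solver; it is only a small reference illustration, not the paper's joint bidiagonalization algorithm, and all names and problem sizes in it are made up for the example.

```python
import numpy as np

# Minimal sketch: solve the general-form Tikhonov problem
#     min_x ||A x - b||_2^2 + lambda^2 ||L x||_2^2
# via the equivalent stacked least squares formulation
#     min_x || [A; lambda*L] x - [b; 0] ||_2.
# This dense solver is for illustration only; the paper's algorithms
# handle this structure iteratively via joint bidiagonalization.

def tikhonov_general(A, b, L, lam):
    """Return the Tikhonov-regularized solution for the pair {A, L}."""
    p = L.shape[0]
    K = np.vstack([A, lam * L])            # coefficient matrix (A^T, lam*L^T)^T
    rhs = np.concatenate([b, np.zeros(p)])
    x, *_ = np.linalg.lstsq(K, rhs, rcond=None)
    return x

# Tiny synthetic example with a discrete first-derivative matrix L.
rng = np.random.default_rng(0)
m, n = 30, 20
A = rng.standard_normal((m, n))
x_true = np.linspace(0.0, 1.0, n)
b = A @ x_true + 1e-3 * rng.standard_normal(m)
L = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)   # maps x to its forward differences
x_reg = tikhonov_general(A, b, L, lam=0.1)
print(np.linalg.norm(x_reg - x_true))
```

The stacked solution satisfies the normal equations $(A^{T}A + \lambda^{2}L^{T}L)x = A^{T}b$, which is a quick way to sanity-check any solver for this problem.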

