We compare two approaches for solving high-dimensional eigenvalue problems with low-rank structure: the inexact Lanczos method and inexact polynomial-filtered subspace iteration. Inexactness stems from low-rank compression, which enables efficient representation of high-dimensional vectors in a low-rank tensor format. A primary challenge in these methods is that standard operations, such as matrix-vector products and linear combinations, increase tensor rank, necessitating rank truncation and hence approximation. The Lanczos method constructs an approximate orthonormal Krylov basis, which is often difficult to represent accurately in low-rank tensor formats, even when the eigenvectors themselves exhibit low-rank structure. In contrast, low-rank polynomial-filtered subspace iteration uses approximate eigenvectors (Ritz vectors) directly as the subspace basis, bypassing the need for an orthonormal Krylov basis. Our analysis and numerical experiments demonstrate that inexact subspace iteration is far more robust to rank-truncation errors than the inexact Lanczos method. We further demonstrate that rank-truncated subspace iteration can converge for problems on which the density matrix renormalization group (DMRG) method stagnates.
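To make the iteration concrete, the following is a minimal NumPy sketch of inexact polynomial-filtered subspace iteration in the dense-matrix setting. It is not the paper's tensor implementation: the low-rank compression step is emulated by a bounded random perturbation of size `eps` injected after each filter application, and the Chebyshev filter bounds are chosen from an (idealized) exact spectrum. The function names and parameters are illustrative assumptions, not the authors' code.

```python
import numpy as np

def cheb_filter(A, V, deg, a, b):
    # Degree-`deg` Chebyshev filter p(A)V: damps the unwanted
    # spectrum in [a, b] while amplifying eigenvalues below a.
    e, c = (b - a) / 2.0, (b + a) / 2.0
    Y = (A @ V - c * V) / e                    # T_1 term
    for _ in range(deg - 1):                   # three-term recurrence
        Y, V = (2.0 / e) * (A @ Y - c * Y) - V, Y
    return Y

def inexact_subspace_iteration(A, k, eps, deg=10, iters=40, seed=0):
    # Toy model of rank-truncated subspace iteration: low-rank
    # compression is emulated by an O(eps) perturbation each sweep.
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    V = np.linalg.qr(rng.standard_normal((n, k)))[0]
    lam = np.linalg.eigvalsh(A)                # idealized filter bounds
    a, b = lam[k], lam[-1]
    for _ in range(iters):
        V = cheb_filter(A, V, deg, a, b)
        V = V + eps * rng.standard_normal(V.shape)  # "truncation" error
        V, _ = np.linalg.qr(V)                 # reorthonormalize block
        H = V.T @ A @ V                        # Rayleigh-Ritz projection
        w, Q = np.linalg.eigh(H)
        V = V @ Q                              # Ritz vectors as new basis
    return w, V
```

Because the Ritz vectors themselves serve as the basis, the per-sweep perturbation does not accumulate the way it would in a Lanczos recurrence, where each compressed basis vector corrupts all subsequent three-term steps; this is the robustness property the abstract refers to.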