In this paper, we present a predictor-corrector strategy for constructing rank-adaptive dynamical low-rank approximations (DLRAs) of matrix-valued ODE systems. The strategy is a compromise between (i) low-rank step-truncation approaches that alternately evolve and compress solutions and (ii) strict DLRA approaches that augment the low-rank manifold using subspaces generated locally in time by the DLRA integrator. The strategy is based on an analysis of the error between a forward temporal update into the ambient full-rank space, which is typically computed in a step-truncation approach before re-compressing, and the standard DLRA update, which is forced to live in a low-rank manifold. We use this error, without requiring its full-rank representation, to correct the DLRA solution. A key ingredient for maintaining a low-rank representation of the error is a randomized singular value decomposition (SVD), which introduces some degree of stochastic variability into the implementation. The strategy is formulated and implemented in the context of discontinuous Galerkin spatial discretizations of partial differential equations and applied to several versions of DLRA methods found in the literature, as well as a new variant. Numerical experiments comparing the predictor-corrector strategy to other methods demonstrate its robustness in overcoming shortcomings of step-truncation and strict DLRA approaches: the former may require more memory than is strictly needed, while the latter may miss transient solution features that cannot be recovered. The effect of randomization, tolerances, and other implementation parameters is also explored.
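To make the key compression ingredient concrete, the following is a minimal sketch of a randomized SVD in the style of Halko, Martinsson, and Tropp, which approximates the dominant rank-$r$ factors of a matrix from a random sketch rather than a full decomposition. This is a generic illustration, not the paper's implementation; the function name, oversampling parameter, and test matrix are illustrative assumptions.

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=5, rng=None):
    """Generic randomized SVD sketch (not the paper's exact algorithm).

    Returns approximate factors U, s, Vt with U @ diag(s) @ Vt ~ A,
    truncated to the requested rank.
    """
    rng = np.random.default_rng(rng)
    m, n = A.shape
    k = min(rank + n_oversample, n)
    # A Gaussian test matrix captures the dominant column space of A.
    Omega = rng.standard_normal((n, k))
    Q, _ = np.linalg.qr(A @ Omega)   # orthonormal basis for the sampled range
    B = Q.T @ A                      # small (k x n) projected problem
    U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_hat                    # lift left factors back to the ambient space
    return U[:, :rank], s[:rank], Vt[:rank, :]

# Example: recover an exactly rank-5 matrix from its randomized sketch.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 150))
U, s, Vt = randomized_svd(A, rank=5, rng=0)
err = np.linalg.norm(A - U @ (s[:, None] * Vt)) / np.linalg.norm(A)
```

Because the test matrix is random, repeated runs with different seeds produce slightly different factors, which is the stochastic variability noted in the abstract; for matrices whose singular values decay rapidly, the approximation error is nonetheless small with high probability.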