We provide rigorous theoretical bounds for Anderson acceleration (AA) that allow for efficient approximate residual calculations, which in turn reduce computational time and memory storage while maintaining convergence. Specifically, we propose a reduced variant of AA that projects the least-squares problem used to compute the Anderson mixing onto a subspace of reduced dimension. The dimension of this subspace adapts dynamically at each iteration, as prescribed by computable heuristic quantities guided by our rigorous theoretical error bounds. Using these heuristics to monitor the error introduced by the approximate calculations, combined with a check on the monotonicity of the convergence, ensures that the numerical scheme converges within a prescribed tolerance on the residual. We numerically assess the performance of AA with approximate calculations on: (i) linear deterministic fixed-point iterations arising from Richardson's scheme applied to linear systems built from open-source benchmark matrices with various preconditioners, and (ii) non-linear deterministic fixed-point iterations arising from non-linear time-dependent Boltzmann equations.
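To make the construction concrete, the following is a minimal Python/NumPy sketch of Anderson acceleration in which the least-squares problem defining the Anderson mixing is solved on a reduced subspace, here realized with a fixed random sketching matrix. This is a simplified illustration, not the paper's method: the adaptive, error-bound-driven choice of the subspace dimension and the specific heuristics are not reproduced, and all names (`reduced_anderson`, the window `m`, the sketch size `p`) are illustrative. The monotonicity safeguard is a simplified stand-in for the convergence check described above.

```python
import numpy as np


def reduced_anderson(g, x0, m=5, p=50, tol=1e-10, max_iter=200, seed=0):
    """Anderson acceleration with a sketched (reduced) least-squares solve.

    g  : fixed-point map, x -> g(x)
    x0 : initial iterate
    m  : history window (max number of stored residual differences)
    p  : dimension of the reduced subspace for the least-squares problem
    """
    rng = np.random.default_rng(seed)
    n = x0.size
    # Fixed random projection onto a p-dimensional subspace. In the adaptive
    # variant, the subspace dimension would change at each iteration based on
    # computable error indicators; a fixed sketch is used here for brevity.
    S = rng.standard_normal((p, n)) / np.sqrt(p)

    x = x0.copy()
    gx = g(x)
    f = gx - x                          # fixed-point residual g(x) - x
    X_hist, G_hist = [x], [gx]

    for _ in range(max_iter):
        if np.linalg.norm(f) < tol:
            break
        if len(X_hist) > 1:
            # Columns: differences of residuals / map values over the window.
            F = np.column_stack(
                [(G_hist[i + 1] - X_hist[i + 1]) - (G_hist[i] - X_hist[i])
                 for i in range(len(X_hist) - 1)])
            G = np.column_stack(
                [G_hist[i + 1] - G_hist[i] for i in range(len(X_hist) - 1)])
            # Reduced least squares: project the residual data onto the
            # subspace before solving for the mixing coefficients.
            gamma, *_ = np.linalg.lstsq(S @ F, S @ f, rcond=None)
            x_new = gx - G @ gamma      # Anderson-mixed iterate
        else:
            x_new = gx                  # plain fixed-point step to start

        gx_new = g(x_new)
        f_new = gx_new - x_new
        # Monotonicity safeguard: revert to the plain fixed-point step when
        # the accelerated step fails to decrease the residual norm.
        if np.linalg.norm(f_new) > np.linalg.norm(f):
            x_new = gx
            gx_new = g(x_new)
            f_new = gx_new - x_new

        x, gx, f = x_new, gx_new, f_new
        X_hist.append(x)
        G_hist.append(gx)
        if len(X_hist) > m + 1:         # keep at most m differences
            X_hist.pop(0)
            G_hist.pop(0)
    return x


if __name__ == "__main__":
    # Usage example: Richardson iteration x <- x + omega * (b - A x),
    # the linear fixed-point setting mentioned in (i) above.
    rng = np.random.default_rng(1)
    n = 500
    M = rng.standard_normal((n, n))
    A = M.T @ M / n + np.eye(n)             # SPD test matrix
    b = rng.standard_normal(n)
    omega = 1.0 / np.linalg.norm(A, 2)      # step size so the map contracts
    g = lambda x: x + omega * (b - A @ x)
    x = reduced_anderson(g, np.zeros(n), m=5, p=100)
    print("final residual:", np.linalg.norm(A @ x - b))
```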