There is growing awareness that errors in the model equations cannot be ignored in data assimilation methods such as four-dimensional variational assimilation (4D-Var). If these errors are accounted for, more information can be extracted from observations, longer assimilation windows become possible, and the minimisation process is easier, at least in principle. Weak constraint 4D-Var estimates the model error and minimises a series of linear least-squares cost functions, which can be done with the conjugate gradient (CG) method; minimising each cost function is called an inner loop. CG needs preconditioning to improve its performance. In previous work, limited memory preconditioners (LMPs) have been constructed using approximations of the eigenvalues and eigenvectors of the Hessian in the previous inner loop. If the Hessian changes significantly between consecutive inner loops, the LMP may be of limited usefulness. To circumvent this, we propose using randomised methods for low-rank eigenvalue decomposition and using these approximations to cheaply construct LMPs from information in the current inner loop. Three randomised methods are compared. Numerical experiments in idealised systems show that the resulting LMPs perform better than the existing LMPs. Using these methods may allow more efficient and robust implementations of incremental weak constraint 4D-Var.
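To make the approach concrete, here is a minimal Python sketch of the idea: the basic randomised eigendecomposition of Halko, Martinsson and Tropp (2011) is used to approximate the leading eigenpairs of a symmetric positive definite matrix available only through matrix-vector products, and a spectral LMP is built from those eigenpairs and passed to CG. The function names `randomised_eig` and `spectral_lmp`, the toy Hessian, and all parameter choices are illustrative assumptions, not the paper's implementation; the paper compares three randomised methods applied to the Hessian of the weak constraint 4D-Var inner-loop cost function.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def randomised_eig(matvec, n, k, p=5, seed=0):
    """Approximate the k leading eigenpairs of a symmetric positive
    definite n-by-n matrix known only via matrix-vector products,
    using the basic randomised scheme of Halko et al. (2011)."""
    rng = np.random.default_rng(seed)
    ell = k + p                                   # oversampled sketch size
    Omega = rng.standard_normal((n, ell))         # Gaussian test matrix
    Y = np.column_stack([matvec(Omega[:, j]) for j in range(ell)])
    Q, _ = np.linalg.qr(Y)                        # orthonormal basis for range(Y)
    B = Q.T @ np.column_stack([matvec(Q[:, j]) for j in range(ell)])
    lam, V = np.linalg.eigh(B)                    # small ell-by-ell eigenproblem
    lam, V = lam[::-1][:k], V[:, ::-1][:, :k]     # keep the k largest eigenpairs
    return lam, Q @ V

def spectral_lmp(lam, U, n):
    """Spectral LMP  P = I + U (Lambda^{-1} - I) U^T, built from the
    approximate eigenvalues lam and eigenvectors U of the Hessian."""
    def apply(v):
        return v + U @ ((1.0 / lam - 1.0) * (U.T @ v))
    return LinearOperator((n, n), matvec=apply)

# Toy SPD "Hessian" with a few dominant eigenvalues, standing in for
# the Hessian of one inner-loop cost function.
n, k = 500, 20
rng = np.random.default_rng(1)
Qfull, _ = np.linalg.qr(rng.standard_normal((n, n)))
d = np.concatenate([np.linspace(100.0, 10.0, k), np.ones(n - k)])
A = (Qfull * d) @ Qfull.T                         # A = Qfull diag(d) Qfull^T
b = rng.standard_normal(n)

lam, U = randomised_eig(lambda v: A @ v, n, k)
P = spectral_lmp(lam, U, n)

its = {"plain": 0, "lmp": 0}
x0, _ = cg(A, b, callback=lambda xk: its.__setitem__("plain", its["plain"] + 1))
x1, _ = cg(A, b, M=P, callback=lambda xk: its.__setitem__("lmp", its["lmp"] + 1))
print(its)  # preconditioned CG should need noticeably fewer iterations
```

With exact eigenpairs, the spectral LMP maps the k largest eigenvalues of the Hessian to 1 and leaves the rest of the spectrum unchanged, so the preconditioned spectrum is better clustered and CG converges in fewer iterations; with randomised approximations the mapped eigenvalues land near 1 instead, which is the trade-off accepted for building the LMP cheaply within the current inner loop.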