In this paper, we introduce the Ensemble Kalman-Stein Gradient Descent (EnKSGD) class of algorithms. EnKSGD builds on the ensemble Kalman filter (EnKF) line of work, applying techniques from sequential data assimilation to unconstrained optimization and parameter estimation problems. The essential idea is that the EnKF, when iterated to convergence, can be exploited as a black box (i.e., derivative-free, zeroth-order) optimization tool. We return to the foundations of the EnKF as a sequential data assimilation technique, including its continuous-time and mean-field limits, with the goal of developing faster optimization algorithms suited to noisy black box optimization and inverse problems. The resulting EnKSGD class of algorithms can be designed both to maintain the desirable property of affine invariance and to employ the well-known backtracking line search. Furthermore, EnKSGD algorithms are designed to avoid the subspace restriction and variance collapse properties of previous iterated EnKF approaches to optimization, as both properties can be undesirable in an optimization context. EnKSGD also generalizes beyond the $L^{2}$ loss and is thus applicable to a wider class of problems than the standard EnKF. Numerical experiments on linear and nonlinear least squares problems, as well as on maximum likelihood estimation, demonstrate the faster convergence of EnKSGD relative to alternative EnKF approaches to optimization.
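To make the essential idea concrete, the following is a minimal sketch, not the paper's exact algorithm: it combines the standard ensemble approximation of the covariance-preconditioned gradient $C \nabla f$ (a statistical linearization, exact only for linear $f$) with an Armijo backtracking line search applied to the ensemble mean. The names `ensemble_grad` and `enksgd_step`, the rigid translation of the ensemble, and all parameter values are illustrative assumptions.

```python
import numpy as np

def ensemble_grad(f, X):
    """Derivative-free estimate of the covariance-preconditioned gradient
    C(X) @ grad f at the ensemble mean, where C(X) is the sample covariance.
    A first-order (statistical linearization) approximation for nonlinear f.
    X: (J, d) array holding J ensemble members of dimension d."""
    xbar = X.mean(axis=0)
    fvals = np.array([f(x) for x in X])
    # Sample cross-covariance between particles and loss values:
    #   (1/J) * sum_j (x_j - xbar) * (f(x_j) - fbar)  ~  C(X) grad f(xbar)
    return (X - xbar).T @ (fvals - fvals.mean()) / len(X)

def enksgd_step(f, X, alpha0=1.0, shrink=0.5, c=1e-4, max_backtracks=30):
    """One illustrative mean update with Armijo backtracking line search.
    Uses g @ g as a pragmatic stand-in for the unavailable grad(f) @ g
    in the sufficient-decrease test (an assumption of this sketch)."""
    xbar = X.mean(axis=0)
    g = ensemble_grad(f, X)
    fx, alpha = f(xbar), alpha0
    for _ in range(max_backtracks):
        if f(xbar - alpha * g) <= fx - c * alpha * (g @ g):
            break
        alpha *= shrink
    # Translate the whole ensemble by the accepted step; the actual EnKSGD
    # dynamics also evolve the ensemble spread, which is omitted here.
    return X - alpha * g

# Usage: minimize an anisotropic quadratic without derivatives.
rng = np.random.default_rng(0)
f = lambda x: 0.5 * (x[0] ** 2 + 100.0 * x[1] ** 2)
X = rng.normal(size=(10, 2))
for _ in range(50):
    X = enksgd_step(f, X)
print(X.mean(axis=0))  # moves toward the minimizer [0, 0]
```

Because the search direction is preconditioned by the sample covariance, the update behaves consistently under affine reparameterizations of the variables, which is the affine invariance property the abstract refers to.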