We introduce a new stochastic algorithm to locate the index-1 saddle points of a function $V:\mathbb R^d \to \mathbb R$, with $d$ possibly large. This algorithm can be seen as an analogue, for saddle points, of stochastic gradient descent, which is a natural stochastic process for locating local minima. It relies on two ingredients: (i) the concentration properties, on index-1 saddle points, of the first eigenmodes of the Witten Laplacian (associated with $V$) on $1$-forms, and (ii) a probabilistic representation of a partial differential equation involving this differential operator. Numerical examples on simple molecular systems illustrate the efficacy of the proposed approach.