In this paper we propose New Q-Newton's method. The update rule for the simplest version is $x_{n+1}=x_n-w_n$, where $w_n=pr_{A_n,+}(v_n)-pr_{A_n,-}(v_n)$, with $A_n=\nabla ^2f(x_n)+\delta _n||\nabla f(x_n)||^2.Id$ and $v_n=A_n^{-1}.\nabla f(x_n)$. Here $\delta _n$ is an appropriate real number chosen so that $A_n$ is invertible, and $pr_{A_n,\pm}$ are the projections onto the vector subspaces spanned by eigenvectors of $A_n$ with positive (respectively, negative) eigenvalues. The main result of this paper roughly says that if $f$ is $C^3$ and a sequence $\{x_n\}$, constructed by New Q-Newton's method from a random initial point $x_0$, {\bf converges}, then the limit point is a critical point, is not a saddle point, and the convergence rate is the same as that of Newton's method. At the end of the paper, we present some issues (saddle points and convergence) one faces when implementing Newton's method and its modifications in Deep Neural Networks. In the appendix, we test the performance of New Q-Newton's method on various benchmark test functions, such as Rastrigin, Ackley, Rosenbrock and many others, against algorithms such as Newton's method, BFGS, Adaptive Cubic Regularization, Random damping Newton's method and Inertial Newton's method, as well as Unbounded Two-way Backtracking Gradient Descent. The experiments demonstrate in particular that the assumption that $f$ is $C^3$ is necessary for some conclusions in the main theoretical results.
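For concreteness, the following is a minimal sketch of one update step in Python/NumPy, under stated assumptions: the candidate list for $\delta_n$ and the numerical invertibility tolerance are illustrative choices, since the method only requires $\delta_n$ to be a real number making $A_n$ invertible.

\begin{verbatim}
import numpy as np

def new_q_newton_step(grad, hess, x, deltas=(0.0, 1.0, -1.0, 2.0), tol=1e-12):
    # One update step of New Q-Newton's method (a sketch, not the
    # authors' reference implementation).
    # grad, hess: callables returning the gradient and Hessian of f at x.
    # deltas: illustrative candidates for delta_n; any choice making
    #         A_n invertible is admissible.
    g = grad(x)
    H = hess(x)
    for delta in deltas:
        # A_n = Hessian + delta_n * ||grad f(x_n)||^2 * Id
        A = H + delta * np.dot(g, g) * np.eye(len(x))
        eigvals, eigvecs = np.linalg.eigh(A)  # A_n is real symmetric
        if np.min(np.abs(eigvals)) > tol:     # accept the first delta making
            break                             # A_n (numerically) invertible
    # v_n = A_n^{-1} grad f(x_n), computed in the eigenbasis of A_n
    coeffs = (eigvecs.T @ g) / eigvals
    # w_n = pr_{A_n,+}(v_n) - pr_{A_n,-}(v_n): flip the sign of the
    # component of v_n lying in the negative eigenspace of A_n
    w = eigvecs @ (np.sign(eigvals) * coeffs)
    return x - w
\end{verbatim}

Iterating this step from a random initial point $x_0$, stopping when $||\nabla f(x_n)||$ falls below a threshold, reproduces the simplest version of the method described above.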