We present a principled approach for designing stochastic Newton methods for solving finite sum optimization problems. Our approach has two steps. First, we rewrite the stationarity conditions as a system of nonlinear equations in which each data point contributes a new row. Second, we apply a subsampled Newton-Raphson method to solve this system of nonlinear equations. By design, methods developed with our approach are incremental, in that they require only a single data point per iteration. Using our approach, we develop a new Stochastic Average Newton (SAN) method, which is incremental and cheap to implement when solving regularized generalized linear models. We show through extensive numerical experiments that SAN requires no prior knowledge of the problem nor any parameter tuning, while remaining competitive with classical variance-reduced gradient methods such as SAG and SVRG.
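As an illustration of the first step, write the objective as $f(w) = \frac{1}{n}\sum_{i=1}^{n} f_i(w)$, where $f_i$ is the loss on the $i$-th data point. One natural reformulation consistent with the description above (the auxiliary variables $\alpha_1,\dots,\alpha_n$ are notation introduced here purely for illustration) is to rewrite the stationarity condition $\nabla f(w) = 0$ as the nonlinear system
\[
\frac{1}{n}\sum_{i=1}^{n} \alpha_i = 0, \qquad \nabla f_i(w) - \alpha_i = 0 \quad \text{for } i = 1,\dots,n,
\]
so that each data point contributes its own row, and a subsampled Newton-Raphson step that samples one of these rows touches only a single data point per iteration.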