This paper considers the distributed optimization problem in which the nodes of a peer-to-peer network cooperatively minimize a finite sum of local objective functions by communicating only with their neighboring nodes. In sharp contrast to the existing literature, where the fastest distributed algorithms converge either with a global linear rate or a local superlinear rate, we propose a distributed adaptive Newton (DAN) algorithm with a global quadratic convergence rate. Our key idea lies in the design of a finite-time set-consensus method with Polyak's adaptive stepsize. Moreover, we introduce a low-rank matrix approximation (LA) technique to compress the innovation of the Hessian matrix, so that each node only needs to transmit a message of dimension $\mathcal{O}(p)$ per iteration (where $p$ is the dimension of the decision vector), which is essentially the same as that of first-order methods. Nevertheless, the resulting DAN-LA converges to an optimal solution with a global superlinear rate. Numerical experiments on logistic regression problems are conducted to validate the advantages of both algorithms over existing methods.
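For context, Polyak's stepsize in its classical form is $\alpha_k = \big(f(x_k) - f^\star\big)/\|\nabla f(x_k)\|^2$, which adapts to the current suboptimality gap; the abstract indicates an adaptive stepsize of this flavor is combined with finite-time consensus, though the exact rule is specified in the paper body. To make the communication-compression idea concrete, the sketch below illustrates one generic way a symmetric Hessian innovation $H_k - H_{k-1}$ can be compressed to a rank-$r$ message of size $\mathcal{O}(rp)$ via a truncated eigendecomposition. This is a minimal illustrative sketch, not the paper's exact LA scheme; the function names and the rank-1 default are assumptions for illustration.

```python
import numpy as np

def compress_hessian_innovation(H_new, H_prev, rank=1):
    """Rank-r compression of the symmetric innovation E = H_new - H_prev.

    Returns the top-r eigenpairs (lam, V) of E, so a node transmits
    O(r * p) numbers per iteration instead of the full p x p matrix.
    (Illustrative sketch; not the paper's exact LA rule.)
    """
    E = H_new - H_prev                          # symmetric innovation matrix
    lam, V = np.linalg.eigh(E)                  # eigendecomposition of E
    idx = np.argsort(np.abs(lam))[::-1][:rank]  # keep top-r by magnitude
    return lam[idx], V[:, idx]                  # the O(r*p) message

def reconstruct(H_prev, lam, V):
    """Receiver side: update the running Hessian estimate."""
    return H_prev + (V * lam) @ V.T             # H_prev + sum_i lam_i v_i v_i^T

# Usage: each node keeps a local Hessian estimate and ships only the
# compressed innovation to its neighbors at every iteration.
p = 5
rng = np.random.default_rng(0)
A = rng.standard_normal((p, p))
H_prev = A @ A.T                                # previous symmetric estimate
H_new = H_prev + 0.1 * rng.standard_normal((p, p))
H_new = (H_new + H_new.T) / 2                   # keep it symmetric
lam, V = compress_hessian_innovation(H_new, H_prev)
H_hat = reconstruct(H_prev, lam, V)             # rank-1 approximation of H_new
```

Truncating the innovation rather than the Hessian itself is what keeps the per-iteration message at $\mathcal{O}(p)$ while still allowing the receiver's estimate to track the true Hessian over time.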