Mirror descent (MD) is a powerful first-order optimization technique that subsumes several optimization algorithms, including gradient descent (GD). In this work, we develop a semi-definite programming (SDP) framework to analyze the convergence rate of MD in centralized and distributed settings under both strongly convex and non-strongly convex assumptions. We view MD through a dynamical systems lens and leverage quadratic constraints (QCs) to provide explicit convergence rates based on Lyapunov stability. For centralized MD under the strong convexity assumption, we develop an SDP that certifies exponential convergence rates. We prove that the SDP always has a feasible solution that recovers the optimal GD rate as a special case. We complement our analysis by providing an $O(1/k)$ convergence rate for convex problems. Next, we analyze the convergence of distributed MD and characterize the rate using an SDP. To the best of our knowledge, the numerical rate of distributed MD has not been previously reported in the literature. We further prove an $O(1/k)$ convergence rate for distributed MD in the convex setting. Our numerical experiments on strongly convex problems indicate that our framework certifies convergence rates superior to the existing rates for distributed GD.
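For reference, and in generic notation that is not necessarily the paper's own, the MD update with step size $\eta$ and Bregman divergence $D_\psi$ induced by a mirror map $\psi$ can be written as
\[
x_{k+1} = \arg\min_{x \in \mathcal{X}} \Big\{ \eta \, \langle \nabla f(x_k), x \rangle + D_\psi(x, x_k) \Big\},
\qquad
D_\psi(x, y) = \psi(x) - \psi(y) - \langle \nabla \psi(y), x - y \rangle .
\]
With $\psi(x) = \tfrac{1}{2}\|x\|_2^2$, the Bregman divergence reduces to $\tfrac{1}{2}\|x - x_k\|_2^2$ and the update becomes the GD step $x_{k+1} = x_k - \eta \nabla f(x_k)$, which is the sense in which MD subsumes GD.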