In this work, we consider a distributed online convex optimization problem with time-varying (potentially adversarial) constraints. A set of nodes jointly aims to minimize a global objective function, which is the sum of local convex functions. At each time, the objective and constraint functions are revealed locally to the nodes only after an action has been taken, so the constraints cannot be satisfied instantaneously. We therefore reformulate the problem to satisfy these constraints in the long term. To this end, we propose a distributed primal-dual mirror-descent-based approach, in which the primal and dual updates are carried out locally at all nodes, followed by sharing and mixing of the primal variables via communication with immediate neighbors. To quantify the performance of the proposed algorithm, we use the challenging but more realistic metrics of dynamic regret and fit. Dynamic regret measures the cumulative loss incurred by the algorithm relative to the best dynamic strategy, while fit measures the long-term cumulative constraint violation. Without assuming the restrictive Slater's condition, we show that the proposed algorithm achieves sublinear regret and fit under mild, commonly used assumptions.
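The update pattern described above (a local primal step on the Lagrangian, a local dual ascent step on the constraint violation, then neighbor mixing of primal variables) can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the paper's algorithm: it uses the Euclidean mirror map (so the mirror-descent step reduces to projected gradient descent), a box feasible set, a single scalar constraint per node, and illustrative step sizes; all function and parameter names are hypothetical.

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    """Projection onto a box, standing in for the convex feasible set."""
    return np.clip(x, lo, hi)

def primal_dual_step(X, Lam, grads_f, g_vals, grads_g, W, eta=0.1, gamma=0.1):
    """One round of the sketched distributed primal-dual update.

    X       : (n, d) primal iterates, one row per node
    Lam     : (n,)   nonnegative dual variables (one constraint per node)
    grads_f : (n, d) gradients of the local losses at X
    g_vals  : (n,)   local constraint values g_i(x_i)
    grads_g : (n, d) gradients of the local constraints at X
    W       : (n, n) doubly stochastic mixing matrix (the network)
    """
    # Primal step: descend on the local Lagrangian f_i + lam_i * g_i,
    # then project back onto the feasible set (Euclidean mirror map).
    X_half = project_box(X - eta * (grads_f + Lam[:, None] * grads_g))
    # Dual step: ascend on the observed constraint violation,
    # keeping the multipliers nonnegative.
    Lam_new = np.maximum(Lam + gamma * g_vals, 0.0)
    # Mixing: each node averages primal variables with its neighbors.
    X_new = W @ X_half
    return X_new, Lam_new
```

In this sketch the dual variable grows while the running constraint value is positive and shrinks toward zero otherwise, which is what drives the long-term (rather than per-round) constraint satisfaction; the mixing matrix `W` encodes the communication graph, with `W[i, j] > 0` only for immediate neighbors.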