We consider a distributed convex optimization problem over a time-varying network that is not always strongly connected. The local cost function of each node is affected by a stochastic process, and all nodes of the network collaborate to minimize the average of their local cost functions. The main challenge of our work is that the gradients of the cost functions are assumed to be unavailable and must be estimated solely from numerical observations of the function values. This setting is known as zeroth-order stochastic convex optimization (ZOSCO). In this paper we take a first step towards solving the distributed optimization problem in the ZOSCO setting. The proposed algorithm performs two basic steps at each iteration: i) each node updates its local variable using a random-perturbation-based single-point gradient estimator of its own local cost function; ii) each node exchanges its local variable with its direct neighbors and then performs a weighted average. When the cost functions are smooth and strongly convex, the attainable optimization error is $O(T^{-1/2})$ after $T$ iterations. This result is of interest because $O(T^{-1/2})$ is the optimal convergence rate for the ZOSCO problem. We also investigate the optimization error for general Lipschitz convex functions and obtain a rate of $O(T^{-1/4})$.
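To make the two-step structure concrete, the following is a minimal sketch of one synchronous iteration, assuming a sphere-sampled single-point estimator and a doubly stochastic mixing matrix; the names `f`, `W`, `delta`, and `eta` are illustrative assumptions, not notation from the paper.

```python
import numpy as np

def single_point_gradient_estimate(f_i, x, delta, rng):
    """Single-point estimator: perturb x along a random unit direction u
    and scale the (noisy) function value f_i(x + delta * u)."""
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # uniform direction on the unit sphere
    return (d / delta) * f_i(x + delta * u) * u

def distributed_zo_step(x, f, W, delta, eta, rng):
    """One synchronous iteration over all n nodes.
    x: (n, d) array of local variables; f: list of n noisy cost oracles;
    W: (n, n) doubly stochastic mixing matrix of the current graph."""
    n, _ = x.shape
    # Step i): local update via a zeroth-order gradient estimate.
    g = np.stack([single_point_gradient_estimate(f[i], x[i], delta, rng)
                  for i in range(n)])
    y = x - eta * g
    # Step ii): exchange with direct neighbors and take a weighted average.
    return W @ y
```

In a time-varying network, `W` would change across iterations to reflect the current communication graph; the weighted average in step ii) is what drives the local variables toward consensus while the gradient step drives them toward the minimizer of the average cost.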