The problem of distributed optimization requires a group of networked agents to compute a parameter that minimizes the average of their local cost functions. While a variety of distributed optimization algorithms can solve this problem, they are typically vulnerable to "Byzantine" agents that do not follow the algorithm. Recent attempts to address this issue focus on single-dimensional functions, or assume certain statistical properties of the functions at the agents. In this paper, we provide two resilient, scalable, distributed optimization algorithms for multi-dimensional functions. Our schemes involve two filters, (1) a distance-based filter and (2) a min-max filter, each of which removes, at each iteration, neighborhood states that are extreme (defined precisely in our algorithms). We show that these algorithms can mitigate the impact of up to F (unknown) Byzantine agents in the neighborhood of each regular agent. In particular, we show that if the network topology satisfies certain conditions, all of the regular agents' states are guaranteed to converge to a bounded region that contains the minimizer of the average of the regular agents' functions.
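To make the filtering idea concrete, the sketch below is a minimal, hypothetical illustration in Python, assuming the distance-based filter discards the F neighbor states farthest (in Euclidean norm) from an agent's own state, and the min-max filter trims, in each coordinate, the F largest and F smallest neighbor values before averaging. The precise filter definitions and the full agent update rule are those given in the algorithms in the paper; the function names and aggregation step here are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def distance_filter(own_state, neighbor_states, F):
    """Keep all neighbor states except the F that are farthest
    (Euclidean distance) from the agent's own state.
    Illustrative version of a distance-based filter."""
    dists = [np.linalg.norm(np.asarray(x) - np.asarray(own_state))
             for x in neighbor_states]
    keep_idx = np.argsort(dists)[:max(len(neighbor_states) - F, 0)]
    return [neighbor_states[i] for i in keep_idx]

def minmax_filter(neighbor_states, F):
    """For each coordinate, drop the F largest and F smallest neighbor
    values, then average the retained values coordinate-wise.
    Illustrative (coordinate-wise trimmed-mean) version of a min-max filter."""
    X = np.asarray(neighbor_states, dtype=float)   # shape: (num_neighbors, d)
    n, d = X.shape
    agg = np.zeros(d)
    for k in range(d):
        order = np.argsort(X[:, k])
        kept = order[F:n - F] if n > 2 * F else order  # keep all if too few neighbors
        agg[k] = X[kept, k].mean()
    return agg

# Example: one regular agent with four neighbors, one of which is an outlier.
own = np.array([0.0, 0.0])
nbrs = [np.array([0.1, 0.2]), np.array([5.0, -4.0]),
        np.array([0.0, 0.1]), np.array([-0.2, 0.3])]
kept = distance_filter(own, nbrs, F=1)   # drops the outlier [5.0, -4.0]
avg = minmax_filter(nbrs, F=1)           # coordinate-wise trimmed average
```

In an actual algorithm of this type, a regular agent would apply such a filter to its neighbors' states at every iteration before combining the retained states with a local gradient step.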