Byzantine machine learning (ML) aims to ensure the resilience of distributed learning algorithms to misbehaving (or Byzantine) machines. Although this problem has received significant attention, prior works often assume the data held by the machines to be homogeneous, which is seldom true in practical settings. Data heterogeneity makes Byzantine ML considerably more challenging, since a Byzantine machine can hardly be distinguished from a non-Byzantine outlier. A few solutions have been proposed to tackle this issue, but these provide suboptimal probabilistic guarantees and fare poorly in practice. This paper closes the theoretical gap, achieving optimality and yielding good empirical results. In fact, we show how to automatically adapt existing solutions for (homogeneous) Byzantine ML to the heterogeneous setting through a powerful mechanism we call nearest neighbor mixing (NNM), which boosts any standard robust distributed gradient descent variant to yield optimal Byzantine resilience under heterogeneity. We obtain similar guarantees (in expectation) by plugging NNM into the distributed stochastic heavy ball method, a practical substitute for distributed gradient descent. We obtain empirical results that significantly outperform state-of-the-art Byzantine ML solutions.
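The abstract describes NNM only at a high level: each worker's gradient is mixed with those of its nearest neighbors before a standard robust aggregation rule is applied. The following is a minimal sketch of that idea; the neighborhood size (n - f nearest gradients) and the downstream aggregator (coordinate-wise median) are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def nearest_neighbor_mixing(gradients, f):
    """Replace each worker's gradient by the average of its nearest neighbors.

    gradients: (n, d) array, one gradient per worker.
    f: assumed upper bound on the number of Byzantine workers.
    Each gradient is averaged with its n - f nearest gradients in Euclidean
    distance (itself included) -- an illustrative choice of neighborhood size.
    """
    n = gradients.shape[0]
    k = n - f  # number of gradients to mix (assumption)
    mixed = np.empty_like(gradients)
    for i in range(n):
        dists = np.linalg.norm(gradients - gradients[i], axis=1)
        neighbors = np.argsort(dists)[:k]  # indices of the k closest gradients
        mixed[i] = gradients[neighbors].mean(axis=0)
    return mixed

def robust_aggregate(gradients, f):
    """NNM followed by a standard robust rule; coordinate-wise median is used
    here purely as a placeholder for any robust aggregator."""
    mixed = nearest_neighbor_mixing(gradients, f)
    return np.median(mixed, axis=0)
```

In this sketch, the aggregated vector returned by `robust_aggregate` would replace the plain average of worker gradients in a distributed gradient descent (or stochastic heavy ball) update step.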