Federated learning (FL) is an emerging machine learning paradigm in which clients jointly learn a model with the help of a cloud server. A fundamental challenge of FL is that the clients are often heterogeneous, e.g., they have different computing power, and thus may send model updates to the server with substantially different delays. Asynchronous FL addresses this challenge by enabling the server to update the model as soon as any client's model update reaches it, without waiting for the other clients' model updates. However, like synchronous FL, asynchronous FL is vulnerable to poisoning attacks, in which malicious clients manipulate the model by poisoning their local data and/or the model updates sent to the server. Byzantine-robust FL aims to defend against poisoning attacks; in particular, it can learn an accurate model even if some clients are malicious and exhibit Byzantine behaviors. However, most existing studies on Byzantine-robust FL focus on synchronous FL, leaving asynchronous FL largely unexplored. In this work, we bridge this gap by proposing AFLGuard, a Byzantine-robust asynchronous FL method. We show, both theoretically and empirically, that AFLGuard is robust against various existing and adaptive poisoning attacks (both untargeted and targeted). Moreover, AFLGuard outperforms existing Byzantine-robust asynchronous FL methods.
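To make the asynchronous update rule concrete, the following is a minimal simulation sketch of a generic asynchronous FL server loop (not AFLGuard itself, whose defense is described later in the paper): heterogeneous clients finish their local computation at different times, and the server applies each model update as soon as it arrives, without waiting for the other clients. The quadratic toy objective, the `local_update` helper, and the exponential delay model are all illustrative assumptions.

```python
import heapq
import numpy as np

# Toy setup (illustrative assumption): clients jointly minimize
# f(theta) = ||theta - target||^2 / 2, each holding a noisy view of the target.
rng = np.random.default_rng(0)
dim, num_clients, lr = 5, 8, 0.1
target = rng.normal(size=dim)
client_targets = [target + 0.1 * rng.normal(size=dim) for _ in range(num_clients)]

def local_update(theta, client_id):
    """One local gradient step; returns the model update (delta) sent to the server."""
    grad = theta - client_targets[client_id]
    return -lr * grad

# Simulated asynchronous arrivals: each client's update lands at its own time.
theta = np.zeros(dim)
events = []  # min-heap of (arrival_time, client_id)
for cid in range(num_clients):
    heapq.heappush(events, (rng.exponential(1.0), cid))

num_server_updates = 100
for _ in range(num_server_updates):
    arrival_time, cid = heapq.heappop(events)
    # For brevity the client computes against the current global model;
    # in practice it would use a possibly stale copy.
    delta = local_update(theta, cid)
    # Asynchronous rule: apply the update immediately, no waiting for other clients.
    theta = theta + delta
    # The client starts a new local round with its own (heterogeneous) delay.
    heapq.heappush(events, (arrival_time + rng.exponential(1.0 + 0.5 * cid), cid))

print("distance to target:", np.linalg.norm(theta - target))
```

In this sketch the server performs one model update per arriving client message, which is exactly the property that makes asynchronous FL fast under client heterogeneity but also exposes each individual update to poisoning, motivating a per-update robustness check such as the one AFLGuard proposes.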