Due to its distributed nature, federated learning is vulnerable to poisoning attacks, in which malicious clients poison the training process by manipulating their local training data and/or the local model updates sent to the cloud server, such that the poisoned global model misclassifies many indiscriminate test inputs or attacker-chosen ones. Existing defenses mainly leverage Byzantine-robust federated learning methods or detect malicious clients. However, these defenses do not have provable security guarantees against poisoning attacks and may be vulnerable to more advanced attacks. In this work, we aim to bridge this gap by proposing FLCert, an ensemble federated learning framework that is provably secure against poisoning attacks with a bounded number of malicious clients. Our key idea is to divide the clients into groups, learn a global model for each group of clients using any existing federated learning method, and take a majority vote among the global models to classify a test input. Specifically, we consider two methods to group the clients and propose two variants of FLCert accordingly, namely FLCert-P, which randomly samples the clients in each group, and FLCert-D, which divides the clients into disjoint groups deterministically. Our extensive experiments on multiple datasets show that the label predicted by FLCert for a test input is provably unaffected by a bounded number of malicious clients, no matter what poisoning attacks they use.
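To make the key idea concrete, below is a minimal Python sketch of the grouping and majority-vote steps described above. The grouping and voting logic follows the abstract; `train_global_model` is a hypothetical placeholder standing in for any existing federated learning method (e.g., FedAvg), and the group counts and sample sizes are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of FLCert's client grouping and majority-vote prediction.
# Assumptions: client_ids is a list of client identifiers; train_global_model
# is a hypothetical function that trains one global model per client group
# with any federated learning method and returns a callable classifier.
import hashlib
import random
from collections import Counter


def flcert_p_groups(client_ids, num_groups, group_size, seed=0):
    """FLCert-P: each group is an independent random sample of clients."""
    rng = random.Random(seed)
    return [rng.sample(client_ids, group_size) for _ in range(num_groups)]


def flcert_d_groups(client_ids, num_groups):
    """FLCert-D: deterministically divide the clients into disjoint groups,
    here by hashing each client ID to a group index (an illustrative choice)."""
    groups = [[] for _ in range(num_groups)]
    for cid in client_ids:
        idx = int(hashlib.sha256(str(cid).encode()).hexdigest(), 16) % num_groups
        groups[idx].append(cid)
    return groups


def flcert_predict(group_models, x):
    """Classify a test input by majority vote among the per-group global models."""
    votes = Counter(model(x) for model in group_models)
    return votes.most_common(1)[0][0]


# Usage sketch:
# groups = flcert_d_groups(client_ids, num_groups=50)
# group_models = [train_global_model(g) for g in groups]   # hypothetical trainer
# label = flcert_predict(group_models, test_input)
```

Because each group's global model depends on only a subset of the clients, a bounded number of malicious clients can change only a bounded number of votes, which is what enables the provable guarantee stated above.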