Byzantine-robust federated learning aims to enable a service provider to learn an accurate global model when a bounded number of clients are malicious. The key idea of existing Byzantine-robust federated learning methods is that the service provider performs statistical analysis on the clients' local model updates and removes suspicious ones before aggregating them to update the global model. However, malicious clients can still corrupt the global models in these methods by sending carefully crafted local model updates to the service provider. The fundamental reason is that there is no root of trust in existing federated learning methods. In this work, we bridge this gap by proposing FLTrust, a new federated learning method in which the service provider itself bootstraps trust. In particular, the service provider itself collects a small, clean training dataset (called the root dataset) for the learning task and maintains a model (called the server model) based on it to bootstrap trust. In each iteration, the service provider first assigns a trust score to each local model update from the clients, where a local model update receives a lower trust score if its direction deviates more from the direction of the server model update. Then, the service provider normalizes the magnitudes of the local model updates such that they lie on the same hyper-sphere as the server model update in the vector space. Our normalization limits the impact of malicious local model updates with large magnitudes. Finally, the service provider computes the average of the normalized local model updates weighted by their trust scores as the global model update, which is used to update the global model. Our extensive evaluations on six datasets from different domains show that FLTrust is secure against both existing attacks and strong adaptive attacks.
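The aggregation described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: it assumes the trust score is a cosine similarity between each local update and the server update, clipped at zero so that updates pointing away from the server update receive no weight, and that updates are represented as flat vectors. The function name `fltrust_aggregate` and the small epsilon for numerical stability are choices of this sketch.

```python
import numpy as np

def fltrust_aggregate(local_updates, server_update, eps=1e-12):
    """Sketch of an FLTrust-style aggregation.

    local_updates: list of 1-D numpy arrays, one per client
    server_update: 1-D numpy array computed on the root dataset
    Returns the trust-weighted global model update.
    """
    g0 = server_update
    g0_norm = np.linalg.norm(g0)
    scores, normalized = [], []
    for g in local_updates:
        # Trust score: cosine similarity to the server update,
        # clipped at zero (the clipping is an assumption of this sketch)
        # so opposite-direction updates get zero weight.
        cos = np.dot(g, g0) / (np.linalg.norm(g) * g0_norm + eps)
        scores.append(max(cos, 0.0))
        # Rescale each local update to the server update's magnitude,
        # so all updates lie on the same hyper-sphere; this bounds the
        # influence of updates with abnormally large magnitudes.
        normalized.append(g * (g0_norm / (np.linalg.norm(g) + eps)))
    total = sum(scores)
    if total == 0.0:
        return np.zeros_like(g0)
    # Average of the normalized updates weighted by their trust scores.
    return sum(s * g for s, g in zip(scores, normalized)) / total
```

For example, a client whose update points in the same direction as the server update keeps full weight, while one pointing in the opposite direction is ignored entirely, regardless of how large its magnitude is.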