Federated learning (FL) enables clients to collaborate with a server to train a machine learning model. To ensure privacy, the server performs secure aggregation of the clients' updates. Unfortunately, because the updates are masked, the server cannot verify their well-formedness (integrity). Consequently, malformed updates designed to poison the model can be injected without detection. In this paper, we formalize the problem of ensuring \textit{both} update privacy and integrity in FL and present a new system, \textsf{EIFFeL}, that enables secure aggregation of \textit{verified} updates. \textsf{EIFFeL} is a general framework that can enforce \textit{arbitrary} integrity checks and remove malformed updates from the aggregate, all without violating privacy. Our empirical evaluation demonstrates the practicality of \textsf{EIFFeL}. For instance, with $100$ clients and $10\%$ poisoning, \textsf{EIFFeL} trains an MNIST classification model to the same accuracy as that of a non-poisoned federated learner in just $2.4$s per iteration.