Federated Learning (FL) enables collaborative training among mutually distrusting parties. Model updates, rather than training data, are collected and fused at a central aggregation server. A key security challenge in FL is that an untrustworthy or compromised aggregation process can lead to unforeseen information leakage. This challenge is especially acute given recently demonstrated attacks that reconstruct large fractions of training data from ostensibly "sanitized" model updates. In this paper, we introduce TRUDA, a new cross-silo FL system that employs a trustworthy and decentralized aggregation architecture to break the concentration of information at any single aggregator. Exploiting the unique computational properties of model-fusion algorithms, TRUDA disassembles all exchanged model updates at parameter granularity and re-stitches them into random partitions assigned to multiple TEE-protected aggregators. Thus, each aggregator has only a fragmentary, shuffled view of the model updates and remains oblivious to the model architecture. These security mechanisms fundamentally mitigate training-data reconstruction attacks while preserving the final accuracy of trained models and keeping performance overheads low.
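To make the disassemble-and-re-stitch idea concrete, below is a minimal sketch in plain NumPy of how a flattened model update might be shuffled at parameter granularity and split into random partitions, one per aggregator, then reassembled after per-shard fusion. The function names (`partition_update`, `reassemble`), the shared seed, and the use of plain FedAvg per shard are illustrative assumptions, not TRUDA's actual implementation, which runs inside TEE-protected aggregators over secure channels.

```python
import numpy as np

def partition_update(update, num_aggregators, seed):
    """Disassemble a flattened model update at parameter granularity and
    re-stitch it into random partitions, one per aggregator (hypothetical sketch)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(update.size)            # random shuffle of parameter indices
    shards = np.array_split(perm, num_aggregators) # index sets, one per aggregator
    # Each aggregator receives only its shard of shuffled parameters, with no
    # knowledge of the original positions or of the model architecture.
    return [update[idx] for idx in shards], shards

def reassemble(fused_fragments, shards, size):
    """Invert the shuffling after per-shard fusion, restoring the fused
    update to its original parameter order."""
    fused = np.empty(size)
    for frag, idx in zip(fused_fragments, shards):
        fused[idx] = frag
    return fused

# Toy example: two clients, three aggregators, eight parameters.
clients = [np.arange(8, dtype=float), np.arange(8, dtype=float) * 2]
shared_seed = 42  # assumed to be agreed among TEEs, hidden from aggregator hosts
parts = [partition_update(u, 3, shared_seed) for u in clients]
shards = parts[0][1]  # identical across clients since the seed is shared
# Per-shard FedAvg: average the two clients' fragments for each aggregator.
fused_frags = [np.mean([parts[c][0][a] for c in range(2)], axis=0) for a in range(3)]
global_update = reassemble(fused_frags, shards, 8)
print(global_update)  # element-wise mean of the two client updates
```

The key observation this sketch relies on is that coordinate-wise fusion algorithms such as FedAvg commute with any permutation and partition of the parameter vector, so fusing shuffled fragments independently and un-shuffling the results yields the same global update as centralized aggregation.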