Federated learning has emerged as a privacy-preserving machine learning approach in which multiple parties can train a single model without sharing their raw training data. It typically relies on secure multi-party computation techniques to provide strong privacy guarantees, ensuring that an untrusted or curious aggregator cannot obtain the isolated replies of individual parties and thereby mount inference attacks. Until recently, such secure aggregation techniques were believed to fully protect against inference attacks by a curious aggregator. However, recent research has demonstrated that a curious aggregator can successfully launch a disaggregation attack to learn information about the model updates of a target party. This paper presents DeTrust-FL, an efficient privacy-preserving federated learning framework that addresses the lack of transparency enabling isolation attacks, such as disaggregation attacks, during secure aggregation, by ensuring that parties' model updates are included in the aggregated model in a private and secure manner. DeTrust-FL proposes a decentralized trust consensus mechanism and incorporates a recently proposed decentralized functional encryption (FE) scheme in which all parties agree on a participation matrix before collaboratively generating decryption key fragments, thereby gaining control and trust over the secure aggregation process in a decentralized setting. Our experimental evaluation demonstrates that DeTrust-FL outperforms state-of-the-art FE-based secure multi-party aggregation solutions in training time and reduces the volume of data transferred. In contrast to existing approaches, this is achieved without creating any trust dependency on external entities.
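To make the agreed-participation mechanism concrete, the following is a minimal toy sketch in Python of decentralized inner-product functional encryption for secure aggregation. It is an assumption-laden illustration, not DeTrust-FL's actual construction: plain modular arithmetic stands in for the cryptographic group of a real FE scheme (so it is not secure), a single participation vector of weights stands in for the paper's participation matrix, and all names (`Party`, `key_fragment`, `aggregate`) are hypothetical. Each party masks its update with a secret tied to the round and releases a decryption-key fragment bound to the agreed weights, so the aggregator can unmask only the agreed weighted sum, never an individual update.

```python
import hashlib
import secrets

P = 2**61 - 1  # toy prime modulus (a real scheme works in a DDH group, not plain integers)

def h(label: str) -> int:
    """Hash a public round label to a field element (models a random oracle)."""
    return int.from_bytes(hashlib.sha256(label.encode()).digest(), "big") % P

class Party:
    def __init__(self):
        self.s = secrets.randbelow(P)  # per-party secret encryption key

    def encrypt(self, x, round_label):
        """Mask the local model update x (a list of ints mod P) with H(round) * s."""
        mask = h(round_label) * self.s % P
        return [(xi + mask) % P for xi in x]

    def key_fragment(self, w_i):
        """Release a decryption-key share bound to the agreed weight w_i.

        Only the combination of all fragments for the agreed participation
        vector lets the aggregator unmask the weighted sum; no subset of
        ciphertexts can be decrypted in isolation."""
        return w_i * self.s % P

def aggregate(ciphertexts, fragments, weights, round_label, dim):
    """Aggregator: recovers sum_i w_i * x_i and nothing else."""
    d = sum(fragments) % P  # functional decryption key for the agreed weights
    out = []
    for j in range(dim):
        acc = sum(w * c[j] for w, c in zip(weights, ciphertexts)) % P
        out.append((acc - h(round_label) * d) % P)
    return out

# Toy run: three parties, all-ones participation vector agreed in advance.
parties = [Party() for _ in range(3)]
updates = [[1, 2], [3, 4], [5, 6]]
w = [1, 1, 1]
rnd = "round-42"
cts = [p.encrypt(x, rnd) for p, x in zip(parties, updates)]
frags = [p.key_fragment(wi) for p, wi in zip(parties, w)]
print(aggregate(cts, frags, w, rnd, dim=2))  # -> [9, 12]
```

The design point the sketch illustrates is that each key fragment is bound to the participation vector the parties agreed on: if the aggregator drops or reweights a ciphertext to isolate one party, the per-party masks no longer cancel and the result stays masked, which is the property that blocks disaggregation attacks.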