Federated learning (FL) typically relies on synchronous training, which is slow due to stragglers. While asynchronous training handles stragglers efficiently, it does not ensure privacy due to its incompatibility with secure aggregation protocols. A buffered asynchronous training protocol known as FedBuff was recently proposed to bridge the gap between synchronous and asynchronous training, mitigating stragglers while also ensuring privacy. FedBuff allows users to send their updates asynchronously and ensures privacy by storing the updates in a private buffer backed by a trusted execution environment (TEE). TEEs, however, have limited memory, which limits the buffer size. Motivated by this limitation, we develop a buffered asynchronous secure aggregation (BASecAgg) protocol that does not rely on TEEs. Conventional secure aggregation protocols cannot be applied in the buffered asynchronous setting, since the buffer may hold local models from different rounds, so the masks that the users apply to protect their models may not cancel out. BASecAgg addresses this challenge by carefully designing the masks so that they cancel out even when they correspond to different rounds. Our convergence analysis and experiments show that BASecAgg has almost the same convergence guarantees as FedBuff without relying on TEEs.
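To illustrate why mixing rounds breaks mask cancellation, here is a minimal sketch of the pairwise-masking idea behind standard secure aggregation; the notation is illustrative, not BASecAgg's exact construction. In a synchronous round $t$, each pair of users $i < j$ agrees on a seed $s_{i,j}^{(t)}$, and user $i$ uploads the masked model

\[ \tilde{\mathbf{x}}_i^{(t)} = \mathbf{x}_i^{(t)} + \sum_{j > i} \mathrm{PRG}\big(s_{i,j}^{(t)}\big) - \sum_{j < i} \mathrm{PRG}\big(s_{j,i}^{(t)}\big) \pmod{R}, \]

so that summing over all users participating in the round cancels every mask term:

\[ \sum_{i} \tilde{\mathbf{x}}_i^{(t)} = \sum_{i} \mathbf{x}_i^{(t)} \pmod{R}. \]

If a buffer instead aggregates $\tilde{\mathbf{x}}_i^{(t)}$ and $\tilde{\mathbf{x}}_j^{(t')}$ with $t \neq t'$, the terms generated from $s_{i,j}^{(t)}$ and $s_{i,j}^{(t')}$ no longer match and residual masks survive the sum; BASecAgg's contribution is a mask design whose cancellation does not depend on the round index.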