Privacy concerns in federated learning (FL) are commonly addressed with secure aggregation schemes that prevent a central party from observing plaintext client updates. However, most such schemes neglect orthogonal FL research that aims at reducing communication between clients and the aggregator and is instrumental in facilitating cross-device FL with thousands or even millions of (mobile) participants. In particular, quantization techniques can typically reduce client-server communication by a factor of 32. In this paper, we unite both research directions by introducing an efficient secure aggregation framework based on outsourced multi-party computation (MPC) that supports any linear quantization scheme. Specifically, we design a novel approximate version of an MPC-based secure aggregation protocol with support for multiple stochastic quantization schemes, including ones that utilize the randomized Hadamard transform and Kashin's representation. In our empirical performance evaluation, we show that with no additional overhead for clients and moderate inter-server communication, we achieve training accuracy similar to that of insecure schemes on standard FL benchmarks. Beyond this, we present an efficient extension to our secure quantized aggregation framework that effectively defends against state-of-the-art untargeted poisoning attacks.
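To make the communication-reduction claim concrete, the following is a minimal sketch of stochastic linear (uniform) quantization, the building block the abstract refers to. It is an illustrative example, not the paper's protocol: the function names and the choice of per-vector min/max scaling are assumptions for exposition. Rounding each coordinate up or down at random, with probability proportional to its fractional position, keeps the quantizer unbiased; with 1 bit per coordinate the payload shrinks by roughly 32x relative to float32.

```python
import numpy as np

def stochastic_quantize(x, num_bits=1):
    """Illustrative stochastic uniform quantizer (not the paper's exact scheme).

    Maps each coordinate of x onto one of 2**num_bits evenly spaced levels
    between min(x) and max(x), rounding up or down at random so that the
    dequantized value is unbiased: E[dequantize(q)] == x coordinate-wise.
    """
    levels = 2 ** num_bits - 1
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / levels if hi > lo else 1.0
    normalized = (x - lo) / scale              # each coordinate now in [0, levels]
    floor = np.floor(normalized)
    prob_up = normalized - floor               # P(round up) = fractional part
    q = floor + (np.random.rand(*x.shape) < prob_up)
    return q.astype(np.uint8), lo, scale

def dequantize(q, lo, scale):
    """Map quantization levels back to (approximate) real values."""
    return lo + q.astype(np.float32) * scale

# A 1024-dimensional float32 update costs 4096 bytes in plaintext;
# at 1 bit per coordinate it packs into 128 bytes plus two scalars.
x = np.random.randn(1024).astype(np.float32)
q, lo, scale = stochastic_quantize(x, num_bits=1)
x_hat = dequantize(q, lo, scale)
```

Because the quantizer is linear in its input, servers can aggregate quantized shares directly, which is what makes it compatible with MPC-based secure aggregation; preprocessing with a randomized Hadamard transform or Kashin's representation flattens the coordinate distribution so that uniform levels waste less range on outliers.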