Differentially Private Federated Learning (DPFL) is an emerging field with many applications. Gradient-averaging-based DPFL methods require costly communication rounds and hardly scale to large-capacity models, because the noise they add depends explicitly on the model dimension. In this work, inspired by the knowledge-transfer approach to non-federated private learning of Papernot et al. (2017; 2018), we design two new DPFL schemes that vote among the data labels returned by each local model, instead of averaging gradients; this avoids the dimension dependence and significantly reduces the communication cost. Theoretically, by applying secure multi-party computation, we can exponentially amplify the (data-dependent) privacy guarantees when the margin of the voting scores is large. Extensive experiments show that our approaches significantly improve the privacy-utility trade-off over the state of the art in DPFL.
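To make the voting mechanism concrete, the following is a minimal sketch (not the authors' implementation) of differentially private label aggregation in the spirit of the PATE-style mechanisms of Papernot et al. (2017; 2018): each party sends only its predicted label for a query, the server tallies noisy vote counts, and releases the noisy plurality winner. The noise scale `sigma`, the party predictions, and the class count are illustrative assumptions.

```python
import numpy as np

def dp_label_vote(party_labels: np.ndarray, num_classes: int, sigma: float,
                  rng: np.random.Generator) -> int:
    """Aggregate one query's labels from all parties via a noisy plurality vote.

    party_labels: shape (num_parties,), each entry a party's predicted class.
    sigma: standard deviation of the Gaussian noise added to each vote count.
    """
    # Per-class vote counts from the parties' label predictions.
    counts = np.bincount(party_labels, minlength=num_classes).astype(float)
    # Perturb each count, then release only the noisy argmax.
    noisy_counts = counts + rng.normal(scale=sigma, size=num_classes)
    return int(np.argmax(noisy_counts))

# Usage: 10 parties vote on one unlabeled example; each communicates a single
# label rather than a d-dimensional gradient, so the cost is dimension-free.
rng = np.random.default_rng(0)
labels = np.array([2, 2, 2, 1, 2, 0, 2, 2, 1, 2])
print(dp_label_vote(labels, num_classes=3, sigma=1.0, rng=rng))
```

Note how this sketch reflects the abstract's margin argument: when one class leads the vote by a wide margin, the added noise almost never flips the argmax, which is what underlies the data-dependent amplification of the privacy guarantee.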