Conventional machine learning (ML) and deep learning approaches require sharing customers' sensitive information with an external credit bureau to build a prediction model, which opens the door to privacy leakage. This leakage risk poses an enormous challenge to cooperation among financial companies. Federated learning is a machine learning setting that can protect data privacy, but high communication cost is often the bottleneck of federated systems, especially for large neural networks. Limiting the number and size of communications is necessary for the practical training of large neural architectures. Gradient sparsification, which uploads only significant gradients and accumulates insignificant gradients locally, has received increasing attention as a method to reduce communication cost. However, the secure aggregation framework cannot use gradient sparsification directly. This article proposes two sparsification methods to reduce communication cost in federated learning. The first is a time-varying hierarchical sparsification method for model parameter updates, which addresses the problem of maintaining model accuracy under a high sparsification ratio and can significantly reduce the cost of a single communication. The second applies sparsification within the secure aggregation framework: we sparsify the encryption mask matrix to reduce communication cost while protecting privacy. Experiments show that, under different Non-IID settings, our method reduces the upload communication cost to roughly 2.9% to 18.9% of that of the conventional federated learning algorithm when the sparsification rate is 0.01.
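To make the idea concrete, the following is a minimal sketch of gradient sparsification with local error accumulation, the general technique the abstract refers to; it is not the paper's exact method, and the function and variable names are illustrative assumptions. Only the top-k largest-magnitude gradient entries are uploaded, and the withheld remainder is kept in a local residual that is added back in the next round.

```python
# Sketch: top-k gradient sparsification with local residual accumulation
# (illustrative only; not the authors' implementation).
import numpy as np

def sparsify_with_residual(grad, residual, sparse_rate=0.01):
    """Return (uploaded indices, uploaded values, updated local residual)."""
    accumulated = grad + residual                      # add back previously withheld gradient
    k = max(1, int(sparse_rate * accumulated.size))    # e.g. 1% of entries at rate 0.01
    flat = accumulated.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]       # indices of the k largest magnitudes
    values = flat[idx]                                 # only these values are communicated
    new_residual = accumulated.copy()
    new_residual.ravel()[idx] = 0.0                    # uploaded entries leave the residual
    return idx, values, new_residual

# Example: one client round with a 1% sparse rate.
grad = np.random.randn(10_000)
residual = np.zeros_like(grad)
idx, values, residual = sparsify_with_residual(grad, residual, sparse_rate=0.01)
print(idx.size, "of", grad.size, "entries uploaded")   # 100 of 10000
```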