In recent years, federated learning (FL) has been widely applied to support decentralized collaborative learning scenarios. Among existing FL models, federated logistic regression (FLR) is a widely used statistical model that has been adopted across various industries. To ensure data security and user privacy, FLR leverages homomorphic encryption (HE) to protect the data exchanged among collaborating parties. However, HE introduces significant computational overhead (i.e., the cost of data encryption/decryption and of computation over encrypted data), which eventually becomes the performance bottleneck of the whole system. In this paper, we propose HAFLO, a GPU-based solution that improves the performance of FLR. The core idea of HAFLO is to summarize a set of performance-critical homomorphic operators (HOs) used by FLR and to accelerate their execution through a joint optimization of storage, IO, and computation. Preliminary results show that our acceleration on FATE, a popular FL framework, achieves a 49.9$\times$ speedup for heterogeneous LR and 88.4$\times$ for homogeneous LR.
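To make the cost structure concrete, the sketch below is a rough illustration (not the HAFLO or FATE implementation) of the kind of homomorphic operators that dominate FLR training: ciphertext encryption, ciphertext addition, and plaintext-scalar multiplication over encrypted gradient fragments. It uses the python-paillier (\texttt{phe}) library; the choice of Paillier, the toy sizes, and all variable names are assumptions made for illustration only.

\begin{verbatim}
# Minimal sketch, assuming Paillier HE and the python-paillier (phe) library.
# Illustrates the operator pattern only; not the HAFLO/FATE code.
import numpy as np
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Party A encrypts its local per-sample gradient fragments before exchange.
local_grads = np.random.randn(4, 3)   # toy sizes: 4 samples, 3 features
enc_grads = [[public_key.encrypt(g) for g in row] for row in local_grads]

# Party B aggregates over encrypted data: ciphertext addition and
# plaintext-scalar multiplication are the performance-critical operators.
weights = np.random.rand(4)           # per-sample weights held in plaintext
enc_agg = [sum(weights[i] * enc_grads[i][j] for i in range(4))
           for j in range(3)]

# Only the private-key holder can recover the aggregated gradient.
agg = [private_key.decrypt(c) for c in enc_agg]
print(np.allclose(agg, weights @ local_grads))  # True (up to float precision)
\end{verbatim}

In this pattern, every encryption, every ciphertext addition, and every scalar multiplication operates on large-integer modular arithmetic, which is why batching these operators onto a GPU is the natural acceleration target.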