Federated learning allows multiple participants to conduct joint modeling without disclosing their local data. Vertical federated learning (VFL) handles the setting in which participants share the same ID space but hold different feature spaces. In most VFL frameworks, to protect the security and privacy of the participants' local data, a third party is needed to generate homomorphic encryption key pairs and perform decryption operations. In this way, the third party is granted the right to decrypt information related to the model parameters. However, it is not easy to find such a credible entity in the real world. Existing methods that address this problem are either communication-intensive or unsuitable for multi-party scenarios. By combining secret sharing and homomorphic encryption, we propose a novel VFL framework without a third party, called EFMVFL, which supports flexible extension to multiple participants with low communication overhead and is applicable to generalized linear models. We give instantiations of our framework under logistic regression and Poisson regression. Theoretical analysis and experiments show that our framework is secure, more efficient, and easy to extend to multiple participants.
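To make the secret-sharing ingredient concrete, the snippet below is a minimal sketch of additive secret sharing, the kind of primitive that can be combined with homomorphic encryption to avoid handing decryption rights to a third party. It is an illustration only, not the EFMVFL protocol itself; the modulus and all function names (`MODULUS`, `share_additively`, `reconstruct`) are hypothetical choices for this example.

```python
import secrets

# Illustrative sketch only: additive secret sharing over Z_MODULUS.
# MODULUS and the function names are assumptions for this example,
# not parameters taken from the EFMVFL paper.
MODULUS = 2**64

def share_additively(value, num_parties):
    """Split `value` into `num_parties` random shares that sum to it mod MODULUS."""
    shares = [secrets.randbelow(MODULUS) for _ in range(num_parties - 1)]
    last_share = (value - sum(shares)) % MODULUS
    return shares + [last_share]

def reconstruct(shares):
    """Recover the secret by summing all shares mod MODULUS."""
    return sum(shares) % MODULUS

if __name__ == "__main__":
    secret_value = 123456789
    shares = share_additively(secret_value, num_parties=3)
    # Each share alone is uniformly random and reveals nothing about the secret;
    # only the sum of all shares reconstructs it, so no single party (and no
    # third party) ever sees the plaintext intermediate value on its own.
    assert reconstruct(shares) == secret_value
    print("shares:", shares)
    print("reconstructed:", reconstruct(shares))
```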