Vertical federated learning (VFL) enables a service provider (i.e., the active party), which owns labeled features, to collaborate with passive parties that possess auxiliary features to improve model performance. Existing VFL approaches, however, suffer from two major vulnerabilities when passive parties unexpectedly quit during the deployment phase of VFL: severe performance degradation and intellectual property (IP) leakage of the active party's labels. In this paper, we propose \textbf{Party-wise Dropout} to improve the VFL model's robustness against the unexpected exit of passive parties, and a defense method called \textbf{DIMIP} to protect the active party's IP during the deployment phase. We evaluate our proposed methods on multiple datasets against different inference attacks. The results show that Party-wise Dropout effectively maintains model performance after a passive party quits, and DIMIP successfully disguises label information from the passive party's feature extractor, thereby mitigating IP leakage.
Title: Robust and IP-Protecting Vertical Federated Learning Against Unexpected Quitting of Parties
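The abstract does not give implementation details of Party-wise Dropout, but the idea can be illustrated with a minimal sketch: during training, each passive party's entire embedding is randomly zeroed with some probability, so the fused model learns to tolerate a party's absence at deployment time. The function name, `drop_prob` parameter, and zero-masking choice below are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

def party_wise_dropout(embeddings, drop_prob=0.3):
    """Illustrative sketch of party-wise dropout (assumed mechanism).

    embeddings: list of per-party embedding vectors; index 0 is the
    active party, which is never dropped. Each passive party's whole
    embedding is zeroed with probability drop_prob, simulating that
    party quitting.
    """
    out = [embeddings[0]]  # the active party's embedding is always kept
    for emb in embeddings[1:]:
        if rng.random() < drop_prob:
            out.append(np.zeros_like(emb))  # simulate the party's exit
        else:
            out.append(emb)
    return np.concatenate(out)

# Toy usage: one active party and two passive parties, 4-dim embeddings.
parties = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
fused = party_wise_dropout(parties, drop_prob=0.5)
```

Dropping a whole party's block, rather than individual features, forces the active party's top model not to rely on any single passive party's representation.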