Vertical federated learning (VFL) allows an active party with labeled features to leverage auxiliary features from passive parties to improve model performance. Concerns about the leakage of private features and labels in both the training and inference phases of VFL have drawn wide research attention. In this paper, we propose a general privacy-preserving vertical federated deep learning framework called FedPass, which leverages adaptive obfuscation to protect features and labels simultaneously. Strong privacy-preserving capabilities for private features and labels are proved theoretically (in Theorems 1 and 2). Extensive experimental results on different datasets and network architectures also justify the superiority of FedPass over existing methods in light of its near-optimal trade-off between privacy and model performance.
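To make the notion of adaptive obfuscation more concrete, the following is a minimal, illustrative sketch of a layer that perturbs intermediate features with parameters derived from a private, learnable tensor before they leave a party. The class and parameter names (AdaptiveObfuscation, passport_dim, to_scale, to_bias) are hypothetical and are not taken from the FedPass paper or its implementation; this is only an assumption-laden toy example of the general idea.

```python
# Illustrative sketch only: a toy "adaptive obfuscation" layer in PyTorch.
# Names and architecture are hypothetical, not the FedPass implementation.
import torch
import torch.nn as nn


class AdaptiveObfuscation(nn.Module):
    """Scales and shifts hidden features with parameters derived from a
    private, learnable tensor, so raw features are never exposed directly."""

    def __init__(self, feature_dim: int, passport_dim: int = 64):
        super().__init__()
        # Private parameter known only to the party that owns this layer.
        self.passport = nn.Parameter(torch.randn(passport_dim))
        # Small maps from the private tensor to per-feature scale and bias.
        self.to_scale = nn.Linear(passport_dim, feature_dim)
        self.to_bias = nn.Linear(passport_dim, feature_dim)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        scale = self.to_scale(self.passport)   # shape: (feature_dim,)
        bias = self.to_bias(self.passport)     # shape: (feature_dim,)
        # Obfuscate the hidden features before transmitting them.
        return features * scale + bias


if __name__ == "__main__":
    layer = AdaptiveObfuscation(feature_dim=32)
    hidden = torch.randn(8, 32)                # a batch of intermediate features
    print(layer(hidden).shape)                 # torch.Size([8, 32])
```

Because the scale and bias are learned jointly with the model but kept private to their owner, an outside party observing only the obfuscated outputs cannot trivially invert them to recover the original features; this is the trade-off between privacy and model performance that the abstract refers to.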