Vertical Federated Learning (VFL) refers to the collaborative training of a model on a dataset whose features are split among multiple data owners, while the label information is held by a single data owner. In this paper, we propose a novel method, Multi Vertical Federated Learning (Multi-VFL), to train VFL models when there are multiple data owners and multiple label owners. Our approach is the first to consider a setting with $D$ data owners (across whom the features are distributed) and $K$ label owners (across whom the labels are distributed). This configuration allows different entities to train and learn optimal models without sharing their data. Our framework uses split learning and adaptive federated optimizers to solve this problem. For empirical evaluation, we run experiments on the MNIST and FashionMNIST datasets. Our results show that using adaptive optimizers for model aggregation speeds up convergence and improves accuracy.
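To make the aggregation claim concrete, the following is a minimal sketch of an adaptive server-side aggregation step in the style of FedAdam, one of the adaptive federated optimizers of Reddi et al.; it assumes model weights are flat NumPy arrays, and the function name and hyperparameter values are illustrative, not the paper's actual implementation.

```python
import numpy as np

def fedadam_step(server_w, client_ws, m, v,
                 lr=0.01, beta1=0.9, beta2=0.99, tau=1e-3):
    """One FedAdam-style server round (illustrative sketch).

    server_w:  current global weights (np.ndarray)
    client_ws: list of locally updated client weights
    m, v:      first/second moment accumulators, same shape as server_w
    """
    # Pseudo-gradient: mean of client updates relative to the server model
    delta = np.mean([w - server_w for w in client_ws], axis=0)
    # Adam-style moment updates applied to the pseudo-gradient
    m = beta1 * m + (1 - beta1) * delta
    v = beta2 * v + (1 - beta2) * delta ** 2
    # Adaptive server step; tau controls the degree of adaptivity
    server_w = server_w + lr * m / (np.sqrt(v) + tau)
    return server_w, m, v
```

Compared with plain FedAvg, which would simply replace `server_w` with the mean of `client_ws`, the per-coordinate scaling by $\sqrt{v}$ is what underlies the faster convergence reported above.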