Federated Learning (FL) and Split Learning (SL) are privacy-preserving Machine Learning (ML) techniques that enable training ML models over data distributed among clients without requiring direct access to their raw data. Existing FL and SL approaches work on horizontally or vertically partitioned data and cannot handle sequentially partitioned data, where segments of multiple-segment sequential data are distributed across clients. In this paper, we propose a novel federated split learning framework, FedSL, to train models on distributed sequential data. The most common ML models for training on sequential data are Recurrent Neural Networks (RNNs). Since the proposed framework is privacy-preserving, segments of multiple-segment sequential data cannot be shared between clients or between clients and the server. To circumvent this limitation, we propose a novel SL approach tailored for RNNs. An RNN is split into sub-networks, and each sub-network is trained on one client, which holds a single segment of each multiple-segment training sequence. During local training, the sub-networks on different clients communicate with each other to capture latent dependencies between consecutive segments residing on different clients, without sharing raw data or complete model parameters. After training the local sub-networks on their local sequential data segments, all clients send their sub-networks to a federated server, where the sub-networks are aggregated to generate a global model. Experimental results on simulated and real-world datasets demonstrate that the proposed method successfully trains models on distributed sequential data while preserving privacy, and outperforms previous FL and centralized learning approaches, achieving higher accuracy in fewer communication rounds.
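To make the described protocol concrete, the following is a minimal sketch (not the authors' implementation) of the core idea: each client trains an RNN sub-network on its own segment, only the hidden state crosses the client boundary, and the server aggregates sub-network weights FedAvg-style. All names (SubNet, local_round, fedavg), the two-client setup, the GRU cells, and the regression head are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of the FedSL idea in PyTorch (assumptions noted above):
# two clients, each holding one segment of every training sequence; only the
# hidden state (never raw data or full parameters) is exchanged between them.
import copy
import torch
import torch.nn as nn

SEG_LEN, FEATS, HIDDEN = 5, 8, 16

class SubNet(nn.Module):
    """One client's sub-network: a single GRU layer."""
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(FEATS, HIDDEN, batch_first=True)

    def forward(self, segment, h0=None):
        out, h_n = self.gru(segment, h0)
        return out, h_n

def local_round(client1, client2, head, seg1, seg2, target):
    """One joint training step over a two-segment sequence.

    Only the hidden state h1 travels from client 1 to client 2; in a real
    deployment it would be serialized over the network rather than shared
    in-process.
    """
    opt = torch.optim.SGD(
        list(client1.parameters()) + list(client2.parameters())
        + list(head.parameters()), lr=0.1)
    opt.zero_grad()
    _, h1 = client1(seg1)        # client 1: first segment -> hidden state
    out2, _ = client2(seg2, h1)  # client 2: conditions on h1, never sees seg1
    loss = nn.functional.mse_loss(head(out2[:, -1]), target)
    loss.backward()              # gradients flow back through h1 to client 1
    opt.step()
    return loss.item()

def fedavg(models):
    """Server-side aggregation: average corresponding sub-network weights."""
    avg = copy.deepcopy(models[0]).state_dict()
    for key in avg:
        avg[key] = torch.stack([m.state_dict()[key] for m in models]).mean(0)
    return avg

if __name__ == "__main__":
    torch.manual_seed(0)
    c1, c2, head = SubNet(), SubNet(), nn.Linear(HIDDEN, 1)
    seg1 = torch.randn(4, SEG_LEN, FEATS)  # segment held by client 1
    seg2 = torch.randn(4, SEG_LEN, FEATS)  # segment held by client 2
    y = torch.randn(4, 1)
    print("loss:", local_round(c1, c2, head, seg1, seg2, y))
    # The server would average sub-networks collected from many clients;
    # a single-client list is used here only to exercise the function.
    global_weights = fedavg([c1])
```

Note that backpropagation through the exchanged hidden state lets client 1's sub-network learn from client 2's loss without either party revealing its raw segment, which is the property the abstract emphasizes.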