We present RHODE, a novel system that enables privacy-preserving training of, and prediction on, Recurrent Neural Networks (RNNs) in a federated learning setting by relying on multiparty homomorphic encryption (MHE). RHODE preserves the confidentiality of the training data, the model, and the prediction data, and it mitigates federated learning attacks that target the gradients under a passive-adversary threat model. We propose a novel packing scheme, multi-dimensional packing, for better utilization of Single Instruction, Multiple Data (SIMD) operations under encryption; with multi-dimensional packing, RHODE efficiently processes a batch of samples in parallel. To avoid the exploding-gradients problem, we also provide several clip-by-value approximations that enable gradient clipping under encryption. We experimentally show that the model performance of RHODE remains comparable to that of non-secure solutions, for both homogeneous and heterogeneous data distributions among the data holders. Our experimental evaluation shows that RHODE scales linearly with the number of data holders and the number of timesteps, and sub-linearly and sub-quadratically with the number of features and the number of hidden units of RNNs, respectively. To the best of our knowledge, RHODE is the first system that provides the building blocks for the training of RNNs and their variants under encryption in a federated learning setting.
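The abstract notes that gradient clipping must be approximated under encryption, since homomorphic schemes evaluate polynomials rather than comparisons. The paper's specific clip-by-value approximations are not reproduced here; as an illustration only, the sketch below contrasts exact (comparison-based) clipping with one common smooth surrogate, `c * tanh(g / c)`, which bounds values to (-c, c) and could itself be replaced by a low-degree polynomial for homomorphic evaluation. All function names are hypothetical.

```python
import numpy as np

def clip_exact(g, c):
    # Exact clip-by-value: relies on comparisons, which are not
    # directly expressible under homomorphic encryption.
    return np.clip(g, -c, c)

def clip_smooth(g, c):
    # Hypothetical smooth surrogate: c * tanh(g / c).
    # It is near-identity for |g| << c and saturates toward +/-c
    # for |g| >> c; under HE, tanh would in turn be approximated
    # by a low-degree polynomial (e.g., a Chebyshev fit).
    return c * np.tanh(g / c)

gradients = np.array([-5.0, -0.5, 0.0, 0.5, 5.0])
print(clip_exact(gradients, 1.0))   # hard-clipped values
print(clip_smooth(gradients, 1.0))  # smoothly saturated values
```

The smooth variant trades a small bias near the clipping threshold for a polynomial-friendly form; the paper evaluates several such approximations.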