Federated Learning aims to train distributed deep models without sharing raw data with a centralized server. Similarly, in distributed inference of neural networks, the network is partitioned and distributed across several physical nodes, and activations and gradients are exchanged between the nodes rather than raw data. However, when a neural network is partitioned this way, the failure of a physical node causes the failure of the neural units placed on that node, resulting in a significant performance drop. Current approaches focus on the resiliency of training in distributed neural networks; the resiliency of inference is less explored. We introduce ResiliNet, a scheme for making inference in distributed neural networks resilient to physical node failures. ResiliNet combines two concepts to provide resiliency: skip hyperconnection, a concept for skipping nodes in distributed neural networks similar to skip connections in ResNets, and a novel technique called failout, which is introduced in this paper. Failout simulates physical node failure conditions during training using dropout, and is specifically designed to improve the resiliency of distributed neural networks. The results of experiments and ablation studies on three datasets confirm the ability of ResiliNet to provide inference resiliency for distributed neural networks.
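The two ideas above can be illustrated with a minimal sketch. Here, failout is modeled as node-granular dropout (the entire output of a physical node is zeroed with some probability during training, rather than individual units), and a skip hyperconnection is modeled as a residual-style sum that routes a node's input past the node. The function names, the summation-based merge, and the failure probability `p_fail` are illustrative assumptions, not the paper's exact formulation.

```python
import random

def failout(vec, p_fail, training=True):
    """Failout (sketch): with probability p_fail during training, zero the
    ENTIRE output vector of a physical node, simulating that node failing.
    Unlike unit-level dropout, all of the node's activations drop together."""
    if training and random.random() < p_fail:
        return [0.0] * len(vec)
    return vec

def node_forward(x, node_fn, p_fail, training=True):
    """One hop through a distributed network with a skip hyperconnection
    (sketch): the node's (possibly failed) output is summed with the signal
    that skips over the node, so a failed node degrades the result instead
    of severing the forward path entirely."""
    out = failout(node_fn(x), p_fail, training)
    return [a + b for a, b in zip(out, x)]
```

For example, with `p_fail=1.0` the node always "fails" during training, and `node_forward` still returns the skipped-over input `x`, which is exactly the situation failout trains the downstream nodes to tolerate.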