Fusing deep learning models trained on separately located clients into a global model in a single communication round is a straightforward way to implement federated learning. Although current model fusion methods have been shown experimentally to be effective at fusing neural networks with almost identical architectures, they are rarely analyzed theoretically. In this paper, we reveal the phenomenon of neuron disturbing, in which neurons from heterogeneous local models interfere with one another. We explain this phenomenon in detail from a Bayesian viewpoint, combining the data heterogeneity among clients with properties of neural networks. Furthermore, to validate our findings, we propose an experimental method, called AMS, that excludes neuron disturbing and fuses neural networks by adaptively selecting a local model to execute the prediction according to the input. Our experiments demonstrate that AMS is more robust to data heterogeneity than general model fusion and ensemble methods, which implies the necessity of accounting for neuron disturbing in model fusion. In addition, as an experimental algorithm, AMS can fuse models with varying architectures, and we list several possible extensions of AMS for future work.
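The per-input selection idea behind AMS can be illustrated with a minimal sketch. Here we assume a confidence-based selection rule (routing each input to the local model with the highest softmax confidence); the abstract does not specify the actual criterion, so this rule and the function names are illustrative only, not the paper's method.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ams_predict(models, x):
    """Adaptively select one local model per input (AMS-style sketch).

    Each local model is a callable mapping an input to class logits.
    We pick the model with the highest peak softmax probability
    (an assumed selection criterion) and return its prediction,
    so heterogeneous models never mix neuron-level outputs.
    """
    probs = [softmax(m(x)) for m in models]       # each local model's class probabilities
    confidences = [p.max() for p in probs]        # peak probability per local model
    chosen = int(np.argmax(confidences))          # most confident local model
    return chosen, int(np.argmax(probs[chosen]))  # (model index, predicted class)

# Two toy "local models" emitting fixed logits for demonstration
local_models = [
    lambda x: np.array([2.0, 0.1]),  # confident in class 0
    lambda x: np.array([0.3, 0.4]),  # nearly uniform
]
chosen, pred = ams_predict(local_models, None)
```

Because exactly one local model handles each input, the selected network's neurons are never combined with those of other clients, which is the property that sidesteps neuron disturbing.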