Federated machine learning is a versatile and flexible tool for utilizing distributed data from different sources, particularly now that communication technology is developing rapidly and an unprecedented amount of data can be collected on mobile devices. Federated learning exploits not only the data but also the computational power of all devices in the network to achieve more efficient model training. Nevertheless, while most traditional federated learning methods work well for homogeneous data and tasks, adapting them to heterogeneous data and task distributions remains challenging. This limitation has constrained the application of federated learning in real-world contexts, especially in healthcare settings. Inspired by the fundamental idea of meta-learning, in this study we propose a new algorithm that integrates federated learning and meta-learning to tackle this issue. In addition, owing to the advantage of transfer learning for model generalization, we further improve our algorithm by introducing partial parameter sharing. We name this method partial meta-federated learning (PMFL). Finally, we apply the algorithms to two medical datasets. We show that our algorithm obtains the fastest training speed and achieves the best performance when dealing with heterogeneous medical datasets.
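To make the idea of combining federated averaging, meta-style local adaptation, and partial parameter sharing concrete, the following is a minimal, hypothetical sketch. It is not the authors' exact PMFL implementation: the model split into a shared "body" and a local "head", the client data, and all function names are illustrative assumptions. Each client adapts a full model to its own heterogeneous task, but only the shared body parameters are averaged on the server, while task-specific heads stay local.

```python
import copy
import torch
import torch.nn as nn

class ClientModel(nn.Module):
    """Toy model split into a shared body (federated) and a local head (kept on-device)."""
    def __init__(self, in_dim=10, hidden=32, out_dim=1):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())  # shared across clients
        self.head = nn.Linear(hidden, out_dim)                           # task-specific, never averaged

    def forward(self, x):
        return self.head(self.body(x))

def local_adapt(model, x, y, steps=5, lr=1e-2):
    """Inner-loop adaptation on one client's (heterogeneous) task."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model

def average_bodies(models):
    """Federated averaging applied only to the shared body parameters."""
    avg = copy.deepcopy(models[0].body.state_dict())
    for key in avg:
        avg[key] = torch.stack([m.body.state_dict()[key].float() for m in models]).mean(0)
    return avg

# One toy federated round with two clients holding different synthetic tasks.
torch.manual_seed(0)
global_body = ClientModel().body.state_dict()
clients = []
for _ in range(2):
    m = ClientModel()
    m.body.load_state_dict(global_body)   # each client starts from the shared body
    x, y = torch.randn(16, 10), torch.randn(16, 1)
    clients.append(local_adapt(m, x, y))
global_body = average_bodies(clients)      # the server updates only the shared part
```

In this sketch, keeping the head local is what "partial parameter sharing" refers to: the averaged body carries knowledge that generalizes across clients, while each head remains free to fit its client's own task distribution.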