Federated learning (FL) has been recognized as a privacy-preserving distributed machine learning paradigm that enables knowledge sharing among heterogeneous Artificial Intelligence of Things (AIoT) devices through centralized global model aggregation. However, FL suffers from model inaccuracy and slow convergence due to the model heterogeneity of the participating AIoT devices. Although various existing methods attempt to overcome this model-heterogeneity bottleneck, most of them improve the accuracy of heterogeneous models in a coarse-grained manner, so deploying FL across large-scale AIoT devices remains a great challenge. To alleviate the negative impact of this problem and take full advantage of the diversity of the heterogeneous models, we propose an efficient framework named HierarchyFL, which uses a small amount of public data to enable efficient and scalable knowledge sharing across differently structured models. Through self-distillation and our proposed ensemble library, the hierarchical models can intelligently learn from each other on the cloud server. Experimental results on various well-known datasets show that HierarchyFL not only maximizes knowledge sharing among heterogeneous models in large-scale AIoT systems, but also greatly improves the model performance of each participating heterogeneous AIoT device.
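The cloud-side knowledge sharing described above can be pictured as ensemble distillation over predictions on a shared public batch. The following is a minimal sketch of that idea, not HierarchyFL's exact formulation: the function names, the averaged-softmax ensemble target, and the KL-based loss are all illustrative assumptions.

```python
import numpy as np

def softmax(z, temperature=1.0):
    # Temperature-scaled softmax over the last axis (numerically stable).
    z = z / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_distillation_loss(student_logits, peer_logits_list, temperature=2.0):
    # Illustrative sketch: average the peers' softened predictions on the
    # public batch to form an ensemble target, then measure how far the
    # student's softened prediction is from it via KL divergence.
    ensemble = np.mean([softmax(p, temperature) for p in peer_logits_list], axis=0)
    student = softmax(student_logits, temperature)
    kl = np.sum(ensemble * (np.log(ensemble + 1e-12) - np.log(student + 1e-12)),
                axis=-1)
    return float(kl.mean())

# Hypothetical logits for a public batch of 4 samples and 3 classes,
# produced by one student model and two differently structured peers.
rng = np.random.default_rng(0)
student_logits = rng.normal(size=(4, 3))
peer_logits = [rng.normal(size=(4, 3)) for _ in range(2)]
loss = ensemble_distillation_loss(student_logits, peer_logits)
```

Minimizing such a loss for every model against the ensemble of the others lets each heterogeneous model absorb knowledge from its peers without sharing private data, since only public-batch predictions are exchanged.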