The increasing demand for intelligent services and privacy protection on mobile and Internet of Things (IoT) devices motivates the wide application of Federated Edge Learning (FEL), in which devices collaboratively train on-device Machine Learning (ML) models without sharing their private data. Constrained by device hardware, diverse user behaviors, and network infrastructure, the algorithm design of FEL faces challenges related to resources, personalization, and network environments. Fortunately, Knowledge Distillation (KD) has been leveraged as an important technique for tackling these challenges. In this paper, we survey the works that apply KD to FEL, discuss the limitations and open problems of existing KD-based FEL approaches, and provide guidance for their real-world deployment.