Federated Machine Learning (Fed ML) is a recent distributed machine learning technique in which a global model is trained collaboratively on clients' local data without that data ever being transmitted. Nodes send only parameter updates (e.g., weight updates in the case of neural networks), which the server fuses to build the global model. Because node data are never divulged, Fed ML guarantees their confidentiality, a crucial aspect of network security, which makes it suitable for data-sensitive Internet of Things (IoT) and mobile applications such as smart geo-location and the smart grid. However, most IoT devices are severely energy constrained, which raises the need to optimize the Fed ML process for efficient training and reduced power consumption. In this paper, we conduct, to the best of our knowledge, the first Systematic Mapping Study (SMS) of Fed ML optimization techniques for energy-constrained IoT devices. From a pool of more than 800 papers, we select 67 that satisfy our criteria and give a structured overview of the field using a set of carefully chosen research questions. Finally, we analyze the state of the art of energy-constrained Fed ML and outline potential recommendations for the research community.
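To make the aggregation step described above concrete, the following is a minimal sketch, in Python with NumPy, of a FedAvg-style server update in which client weight updates are fused by a weighted average proportional to each client's local data size. The function and variable names are illustrative assumptions for exposition, not an implementation drawn from the surveyed papers.

import numpy as np

def server_aggregate(client_updates, client_sizes):
    """Fuse client weight updates into global weights via a weighted average.

    client_updates: list of per-client updates, each a list of numpy arrays
                    (one array per model layer).
    client_sizes:   number of local training samples per client, used as
                    aggregation weights (FedAvg-style assumption).
    """
    total = float(sum(client_sizes))
    n_layers = len(client_updates[0])
    global_weights = []
    for layer in range(n_layers):
        # Weighted sum of this layer's parameters across all clients.
        layer_avg = sum(
            (size / total) * update[layer]
            for update, size in zip(client_updates, client_sizes)
        )
        global_weights.append(layer_avg)
    return global_weights

# Example: two clients, a single-layer model with 3 weights each.
updates = [[np.array([0.1, 0.2, 0.3])], [np.array([0.3, 0.0, 0.1])]]
sizes = [100, 300]  # the second client holds more data, so its update weighs more
print(server_aggregate(updates, sizes))  # -> [array([0.25, 0.05, 0.15])]

Only the parameter arrays and the sample counts cross the network in this scheme; the raw training data never leave the clients, which is the confidentiality property the abstract refers to.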