Under the federated learning paradigm, a set of nodes can cooperatively train a machine learning model with the help of a centralized server. Such a server is also tasked with assigning a weight to the information received from each node, and often also with dropping too-slow nodes from the learning process. Both decisions have a major impact on the resulting learning performance, and can interfere with each other in counterintuitive ways. In this paper, we focus on edge networking scenarios and investigate existing and novel approaches to such model-weighting and node-dropping decisions. Leveraging a set of real-world experiments, we find that popular, straightforward decision-making approaches may yield poor performance, and that considering the quality of data in addition to its quantity can substantially improve learning.
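As a concrete reference point for the "straightforward" approach critiqued above, the most common quantity-based weighting rule (FedAvg-style aggregation, shown here for illustration and not necessarily the scheme proposed in this paper) weights each surviving node's update by its local dataset size and excludes dropped stragglers from the aggregation:
$$
w^{t+1} \;=\; \sum_{k \in \mathcal{K}_t} \frac{n_k}{\sum_{j \in \mathcal{K}_t} n_j}\, w_k^{t+1},
$$
where $w_k^{t+1}$ is node $k$'s locally updated model, $n_k$ its number of local samples, and $\mathcal{K}_t$ the set of nodes that returned an update before the round deadline. Note that this rule accounts only for data quantity; the data-quality considerations studied in this paper would enter through a different choice of the per-node weights.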