Federated Learning (FL), which has emerged as a novel secure learning paradigm, has attracted notable public attention. In each round of synchronous FL training, only a fraction of the available clients are chosen to participate, and this selection decision can have a significant effect on training efficiency as well as on the final model performance. In this paper, we investigate the client selection problem under a volatile context, in which the local training of heterogeneous clients is likely to fail for various reasons and at different frequencies. Intuitively, too many training failures may reduce training efficiency, while over-selecting the more stable clients may introduce bias and thereby degrade training effectiveness. To tackle this tradeoff, we formulate the client selection problem under the joint consideration of effective participation and fairness. Further, we propose E3CS, a stochastic client selection scheme based on an adversarial bandit solution, and we corroborate its effectiveness through experiments on real data. According to the experimental results, the proposed selection scheme achieves up to 2x faster convergence to a fixed model accuracy while maintaining the same level of final model accuracy, compared with the vanilla selection scheme in FL.
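To make the adversarial-bandit-based selection idea concrete, below is a minimal illustrative sketch of an Exp3-style stochastic client selection loop under volatile clients. It is not the paper's exact E3CS formulation: the function names, the uniform-mixing term standing in for the fairness/exploration guarantee, the per-client `success_rate` model of training failure, and the separate step size `eta` are all assumptions made for this sketch.

```python
import numpy as np


def exp3_select(weights, k, gamma, rng):
    """Sample k clients without replacement from exponential weights,
    mixed with the uniform distribution (exploration/fairness term)."""
    n = len(weights)
    probs = (1 - gamma) * weights / weights.sum() + gamma / n
    chosen = rng.choice(n, size=k, replace=False, p=probs)
    return chosen, probs


def exp3_update(weights, chosen, probs, rewards, eta):
    """Importance-weighted exponential update applied only to the
    clients that were actually selected this round."""
    for i, r in zip(chosen, rewards):
        weights[i] *= np.exp(eta * r / probs[i])
    return weights


# Toy usage: 20 volatile clients, select 5 per round; a client's local
# training succeeds with its (hypothetical) stability probability.
rng = np.random.default_rng(0)
n_clients, k = 20, 5
weights = np.ones(n_clients)
success_rate = rng.uniform(0.3, 0.95, n_clients)  # assumed per-client stability

for _ in range(100):
    chosen, probs = exp3_select(weights, k, gamma=0.2, rng=rng)
    # reward 1 if the selected client completes local training, else 0
    rewards = (rng.uniform(size=k) < success_rate[chosen]).astype(float)
    weights = exp3_update(weights, chosen, probs, rewards, eta=0.05)
```

The uniform-mixing coefficient `gamma` keeps every client's selection probability bounded away from zero, which is the rough analogue of the fairness consideration in the abstract: without it, the exponential weights would concentrate on the most stable clients and bias the trained model.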