Federated Learning (FL), an emerging privacy-preserving machine learning paradigm, has received notable public attention. In each round of synchronous FL training, only a fraction of the available clients are chosen to participate, and this selection decision can have a significant effect on training efficiency as well as on the final model performance. In this paper, we investigate the client selection problem in a volatile context, in which the local training of heterogeneous clients is liable to fail for various reasons and at different frequencies. {\color{black}Intuitively, too many training failures may reduce training efficiency, while overly favoring the more stable clients may introduce selection bias and thereby degrade training effectiveness. To address this tradeoff, we formulate the client selection problem under joint consideration of effective participation and fairness.} Further, we propose E3CS, a stochastic client selection scheme to solve this problem, and we corroborate its effectiveness through experiments on real data. According to our experimental results, the proposed selection scheme achieves up to 2x faster convergence to a fixed model accuracy while maintaining the same level of final model accuracy, compared with state-of-the-art selection schemes.