In a Federated Learning (FL) setup, a number of devices contribute to the training of a common model. We present a method for selecting the devices that provide updates in order to achieve improved generalization, fast convergence, and better device-level performance. We formulate a min-max optimization problem and decompose it into a primal-dual setup, where the duality gap is used to quantify the device-level performance. Our strategy combines \emph{exploration} of data freshness through random device selection with \emph{exploitation} through simplified estimates of the device contributions. This improves the performance of the trained model both in terms of generalization and personalization. A modified Truncated Monte-Carlo (TMC) method is applied during the exploitation phase to estimate each device's contribution while keeping the communication overhead low. The experimental results show that the proposed approach achieves competitive generalization and personalization performance with lower communication overhead compared to the baseline schemes.
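The abstract only outlines the selection strategy, so the following is a minimal sketch of the exploration-exploitation idea under stated assumptions: an epsilon-style exploration probability, a generic set-utility function standing in for the duality-gap-based criterion, and a plain TMC-style truncated permutation average. The function names, the toy utility, and all parameter values are illustrative assumptions, not the paper's actual implementation.

```python
import random

def tmc_contributions(devices, utility, num_perms=50, tol=1e-3, seed=0):
    """Truncated Monte-Carlo estimate of each device's contribution:
    average its marginal utility over random permutations, truncating a
    permutation once the remaining gain falls below tol (this truncation
    is what keeps the estimation overhead low)."""
    rng = random.Random(seed)
    totals = {d: 0.0 for d in devices}
    counts = {d: 0 for d in devices}
    full_value = utility(devices)
    for _ in range(num_perms):
        perm = devices[:]
        rng.shuffle(perm)
        prev_value, subset = utility([]), []
        for d in perm:
            if abs(full_value - prev_value) < tol:  # negligible marginal gain left
                break
            subset.append(d)
            value = utility(subset)
            totals[d] += value - prev_value
            counts[d] += 1
            prev_value = value
    return {d: totals[d] / counts[d] if counts[d] else 0.0 for d in devices}

def select_devices(contrib, k, explore_prob, seed=0):
    """Exploration-exploitation device selection: with probability explore_prob,
    sample k devices uniformly at random (exploration for data freshness);
    otherwise pick the k devices with the highest estimated contributions."""
    rng = random.Random(seed)
    devices = list(contrib)
    if rng.random() < explore_prob:
        return rng.sample(devices, k)
    return sorted(devices, key=contrib.get, reverse=True)[:k]

if __name__ == "__main__":
    # Toy example: subset utility is a diminishing-returns sum of per-device weights.
    weights = {f"dev{i}": w for i, w in enumerate([0.5, 0.2, 0.9, 0.1, 0.4])}
    utility = lambda subset: sum(weights[d] for d in subset) ** 0.5 if subset else 0.0
    scores = tmc_contributions(list(weights), utility)
    print(select_devices(scores, k=2, explore_prob=0.2))
```

In the paper itself, the exploitation criterion is tied to the duality gap of the primal-dual formulation rather than to a generic utility, so this sketch should be read only as a schematic of the selection loop.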