Federated Learning (FL) is a variant of distributed learning where edge devices collaborate to learn a model without sharing their data with the central server or each other. We refer to the process of training multiple independent models simultaneously in a federated setting using a common pool of clients as multi-model FL. In this work, we propose two variants of the popular FedAvg algorithm for multi-model FL, with provable convergence guarantees. We further show that for the same amount of computation, multi-model FL can have better performance than training each model separately. We supplement our theoretical results with experiments in strongly convex, convex, and non-convex settings.
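The abstract does not spell out the two proposed FedAvg variants, but the multi-model setting it describes can be sketched concretely. The following minimal Python sketch is illustrative only: it assumes uniformly random client-to-model assignment each round and a synthetic least-squares client objective, and the helper names `local_sgd` and `multi_model_fedavg` are hypothetical, not the paper's algorithms.

```python
import numpy as np

def local_sgd(weights, data, lr=0.1, epochs=1):
    """One client's local update: a few gradient steps on a
    least-squares loss (illustrative stand-in for the client objective)."""
    X, y = data
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def multi_model_fedavg(global_models, client_data, num_rounds=50, seed=0):
    """Sketch of multi-model FedAvg: each round, every client in the
    common pool is assigned (here, uniformly at random) to one of the
    M models, trains that model locally, and the server averages the
    returned weights per model, as in single-model FedAvg."""
    rng = np.random.default_rng(seed)
    M, N = len(global_models), len(client_data)
    for _ in range(num_rounds):
        assignment = rng.integers(0, M, size=N)  # client i -> model assignment[i]
        for m in range(M):
            clients = [i for i in range(N) if assignment[i] == m]
            if not clients:
                continue  # no client trained model m this round; keep it unchanged
            updates = [local_sgd(global_models[m], client_data[i]) for i in clients]
            global_models[m] = np.mean(updates, axis=0)  # FedAvg aggregation
    return global_models
```

Under this sketch, each round's computation per client is the same as in single-model FedAvg (one local update), so M models share the pool's total compute, which is the comparison underlying the abstract's claim about training each model separately.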