Federated Learning (FL) is a machine learning setting where many devices collaboratively train a machine learning model while keeping the training data decentralized. In most current training schemes, the central model is refined by averaging the parameters of the server model and the updated parameters from the client side. However, directly averaging model parameters is only possible if all models have the same structure and size, which can be a restrictive constraint in many scenarios. In this work we investigate more powerful and more flexible aggregation schemes for FL. Specifically, we propose ensemble distillation for model fusion, i.e., training the central classifier on the outputs of the client models over unlabeled data. This knowledge distillation technique mitigates privacy risk and cost to the same extent as the baseline FL algorithms, but allows flexible aggregation over heterogeneous client models that can differ, e.g., in size, numerical precision, or structure. We show in extensive empirical experiments on various CV/NLP datasets (CIFAR-10/100, ImageNet, AG News, SST2) and settings (heterogeneous models/data) that the server model can be trained much faster, requiring fewer communication rounds than any existing FL technique.
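To make the fusion step concrete, the following is a minimal PyTorch-style sketch of server-side ensemble distillation, not the paper's exact implementation: the server averages the logits of the received client models on a batch of unlabeled data and trains the central model to match that ensemble via a KL-divergence loss. The names (`server_model`, `client_models`, `unlabeled_batch`, `temperature`) are illustrative placeholders, and the sketch assumes all client models, whatever their architecture, output logits over the same label set.

```python
import torch
import torch.nn.functional as F


def ensemble_distillation_step(server_model, client_models, unlabeled_batch,
                               optimizer, temperature=1.0):
    """One fusion step: distill the averaged client ensemble into the server model.

    A minimal sketch under the assumptions stated above; `client_models` are the
    locally updated models received in this round, `unlabeled_batch` is a batch
    of unlabeled data available on the server.
    """
    with torch.no_grad():
        # Ensemble teacher: average the client logits on the unlabeled batch.
        for m in client_models:
            m.eval()
        client_logits = torch.stack(
            [m(unlabeled_batch) for m in client_models], dim=0)
        teacher_probs = F.softmax(client_logits.mean(dim=0) / temperature, dim=-1)

    # Student: the central (server) model, trained to match the ensemble output.
    student_log_probs = F.log_softmax(
        server_model(unlabeled_batch) / temperature, dim=-1)
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the teacher signal is built purely from model outputs, the client models never need to share a common parameterization, which is what allows aggregation over clients that differ in size, precision, or structure.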