Federated Learning (FL) decouples model training from the need for direct access to the data and allows organizations to collaborate with industry partners to reach a satisfactory level of performance without sharing sensitive business information. The performance of a machine learning algorithm is highly sensitive to the choice of its hyperparameters. In an FL setting, hyperparameter optimization poses new challenges. In this work, we investigated the impact of different hyperparameter optimization approaches in an FL system. In an effort to reduce communication costs, a critical bottleneck in FL, we investigated a local hyperparameter optimization approach that -- in contrast to a global hyperparameter optimization approach -- allows every client to have its own hyperparameter configuration. We implemented these approaches based on grid search and Bayesian optimization and evaluated the algorithms on the MNIST data set using an i.i.d. partition and on an Internet of Things (IoT) sensor based industrial data set using a non-i.i.d. partition.
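To make the local hyperparameter optimization idea concrete, the following is a minimal sketch (not the authors' implementation) of a FedAvg-style simulation in which each client runs its own grid search over the learning rate on a held-out split before local training; the synthetic linear-regression task, the grid values, and all function names are illustrative assumptions.

```python
import numpy as np

def local_train(X, y, w, lr, epochs=50):
    # plain gradient descent on mean squared error
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def local_grid_search(X, y, w, grid):
    # local HPO: each client picks its own lr via a held-out split,
    # so no hyperparameter configurations travel over the network
    n = int(0.8 * len(y))
    best_lr, best_loss = None, float("inf")
    for lr in grid:
        w_try = local_train(X[:n], y[:n], w.copy(), lr)
        loss = np.mean((X[n:] @ w_try - y[n:]) ** 2)
        if loss < best_loss:
            best_lr, best_loss = lr, loss
    return best_lr

# synthetic non-identical client data sets (illustrative only)
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + 0.1 * rng.normal(size=100)
    clients.append((X, y))

grid = [0.001, 0.01, 0.1]   # assumed learning-rate grid
w_global = np.zeros(2)
for _ in range(5):          # federated rounds
    updates = []
    for X, y in clients:
        lr = local_grid_search(X, y, w_global, grid)       # per-client config
        updates.append(local_train(X, y, w_global.copy(), lr))
    w_global = np.mean(updates, axis=0)                    # FedAvg-style averaging
```

A global approach would instead evaluate each candidate configuration with a full federated run across all clients, multiplying the number of communication rounds by the grid size; the local variant above keeps the search entirely on-device.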