We address the relatively unexplored problem of hyper-parameter optimization (HPO) for federated learning (FL-HPO). We introduce Federated Loss suRface Aggregation (FLoRA), the first FL-HPO solution framework that can address use cases of tabular data and gradient boosting training algorithms in addition to the stochastic gradient descent/neural network setups commonly addressed in the FL literature. The framework enables single-shot FL-HPO by first identifying a good set of hyper-parameters that are then used in a **single** FL training. Thus, it enables FL-HPO solutions with minimal additional communication overhead compared to FL training without HPO. Our empirical evaluation of FLoRA for Gradient Boosted Decision Trees on seven OpenML data sets demonstrates significant model accuracy improvements over the considered baseline, and robustness to an increasing number of parties involved in FL-HPO training.
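To make the single-shot idea concrete, below is a minimal, hypothetical sketch (not the paper's exact aggregation strategy) in which each party reports local validation losses for a shared list of candidate hyper-parameter configurations, and the server averages these per-party loss surfaces to choose the configuration used in the single FL training. All function names, variable names, and numbers are illustrative assumptions.

```python
import numpy as np

# Hypothetical single-shot FL-HPO sketch: each party evaluates the same candidate
# hyper-parameter configurations on its local data and reports the resulting
# validation losses; the server aggregates the per-party losses into an
# approximate global loss surface and picks the configuration with the lowest
# aggregated loss for the single FL training run.

def aggregate_loss_surfaces(party_losses):
    """party_losses: list of 1-D arrays, one per party, each of length n_configs,
    giving the local validation loss of every candidate configuration.
    Returns the index of the configuration minimizing the mean aggregated loss."""
    stacked = np.vstack(party_losses)      # shape: (n_parties, n_configs)
    aggregated = stacked.mean(axis=0)      # simple unweighted aggregation
    return int(np.argmin(aggregated))

# Usage: three parties, four candidate GBDT configurations (illustrative values only).
candidate_configs = [
    {"learning_rate": 0.1, "max_depth": 3},
    {"learning_rate": 0.1, "max_depth": 6},
    {"learning_rate": 0.3, "max_depth": 3},
    {"learning_rate": 0.3, "max_depth": 6},
]
losses_per_party = [
    np.array([0.42, 0.39, 0.45, 0.41]),
    np.array([0.40, 0.37, 0.44, 0.43]),
    np.array([0.41, 0.38, 0.46, 0.42]),
]
best = aggregate_loss_surfaces(losses_per_party)
print("chosen configuration:", candidate_configs[best])
# The chosen configuration is then used in a single federated training run,
# so the additional communication is limited to the per-party loss reports.
```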