Hyperparameter optimization (HPO) is crucial for machine learning algorithms to achieve satisfactory performance, and its progress has been boosted by related benchmarks. Nonetheless, existing benchmarking efforts all focus on HPO for traditional centralized learning while ignoring federated learning (FL), a promising paradigm for collaboratively learning models from dispersed data. In this paper, we first identify several unique characteristics of HPO for FL algorithms from various aspects. Due to this uniqueness, existing HPO benchmarks no longer satisfy the need to compare HPO methods in the FL setting. To facilitate research on HPO in the FL setting, we propose and implement a benchmark suite, FedHPO-B, that incorporates comprehensive FL tasks, enables efficient function evaluations, and eases continuing extensions. We also conduct extensive experiments based on FedHPO-B to benchmark a few HPO methods. We open-source FedHPO-B at https://github.com/alibaba/FederatedScope/tree/master/benchmark/FedHPOB and will maintain it actively.