Meta-learning hyperparameter optimization (HPO) algorithms from prior experiments is a promising approach to improve optimization efficiency over objective functions from a similar distribution. However, existing methods are restricted to learning from experiments sharing the same set of hyperparameters. In this paper, we introduce the OptFormer, the first text-based Transformer HPO framework that provides a universal end-to-end interface for jointly learning policy and function prediction when trained on vast tuning data from the wild, such as Google's Vizier database, one of the world's largest HPO datasets. Our extensive experiments demonstrate that the OptFormer can simultaneously imitate at least 7 different HPO algorithms, which can be further improved via its function uncertainty estimates. Compared to a Gaussian Process, the OptFormer also learns a robust prior distribution for hyperparameter response functions, and can thereby provide more accurate and better calibrated predictions. This work paves the way for future extensions that train a Transformer-based model as a general HPO optimizer.
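
As a rough illustration of the text-based interface described above, the sketch below shows one plausible way a tuning trajectory could be flattened into plain text for a sequence model. The serialization format, separator token, and function names here are illustrative assumptions, not the paper's exact scheme.

    # Hypothetical sketch: serializing an HPO trajectory as plain text, in the
    # spirit of OptFormer's text-based interface. The format, "<sep>" separator,
    # and function names are assumptions for illustration only.

    def serialize_trial(params, objective):
        """Flatten one trial (hyperparameters + observed objective) into text."""
        kv = ", ".join(f"{name}={value}" for name, value in sorted(params.items()))
        return f"trial: {kv} | objective: {objective:.4f}"

    def serialize_history(trials):
        """Concatenate past trials into a single prompt for a sequence model."""
        return " <sep> ".join(serialize_trial(p, y) for p, y in trials)

    history = [
        ({"learning_rate": 1e-3, "batch_size": 128}, 0.812),
        ({"learning_rate": 3e-4, "batch_size": 256}, 0.847),
    ]
    print(serialize_history(history))

Under this framing, decoding from such a prompt can serve both roles the abstract mentions: generating the next trial's hyperparameters (policy) or predicting the objective value of a candidate configuration (function prediction).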