With the extensive application of machine learning models, automatic hyperparameter optimization (HPO) has become increasingly important. Motivated by the tuning behavior of human experts, it is intuitive to leverage auxiliary knowledge from past HPO tasks to accelerate the current one. In this paper, we propose TransBO, a novel two-phase transfer learning framework for HPO that simultaneously handles the complementary nature of source tasks and the dynamics of knowledge aggregation. The framework extracts and aggregates source and target knowledge jointly and adaptively, with the aggregation weights learned in a principled manner. Extensive experiments, covering static and dynamic transfer learning settings as well as neural architecture search, demonstrate the superiority of TransBO over state-of-the-art methods.
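To make the two-phase idea concrete, below is a minimal illustrative sketch, not the paper's exact algorithm: it assumes Gaussian-process surrogates (via scikit-learn), sets the source weights from each surrogate's pairwise ranking accuracy on the target history, and uses a hypothetical schedule `beta` in place of TransBO's learned second-phase weights. The names `fit_gp`, `ranking_weight`, and `two_phase_surrogate` are illustrative, not from the paper.

```python
import numpy as np
from itertools import combinations
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def fit_gp(X, y):
    """Fit one Gaussian-process surrogate on a task's (config, performance) history."""
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    return gp


def ranking_weight(gp, X_tgt, y_tgt):
    """Fraction of target-observation pairs the surrogate ranks correctly."""
    mu = gp.predict(X_tgt)
    pairs = list(combinations(range(len(y_tgt)), 2))
    if not pairs:
        return 1.0
    correct = sum((mu[i] < mu[j]) == (y_tgt[i] < y_tgt[j]) for i, j in pairs)
    return correct / len(pairs)


def two_phase_surrogate(source_histories, X_tgt, y_tgt):
    # Phase 1: aggregate the source surrogates into one transferred
    # surrogate, weighting each by how well it ranks the target data.
    source_gps = [fit_gp(X, y) for X, y in source_histories]
    w = np.array([ranking_weight(gp, X_tgt, y_tgt) for gp in source_gps])
    w = w / max(w.sum(), 1e-12)

    def transferred_mean(X):
        return sum(wi * gp.predict(X) for wi, gp in zip(w, source_gps))

    # Phase 2: blend the transferred surrogate with the target surrogate.
    # beta is a hypothetical schedule that trusts the target GP more as
    # target observations accumulate (a stand-in for learned weights).
    target_gp = fit_gp(X_tgt, y_tgt)
    beta = len(y_tgt) / (len(y_tgt) + 10.0)

    def predict_mean(X):
        return beta * target_gp.predict(X) + (1.0 - beta) * transferred_mean(X)

    return predict_mean


# Toy usage: two shifted 1-D source tasks and a small target history.
rng = np.random.default_rng(0)
objective = lambda x, shift: np.sin(3.0 * (x - shift)).ravel()
X_src = rng.uniform(0.0, 1.0, size=(30, 1))
sources = [(X_src, objective(X_src, s)) for s in (0.0, 0.1)]
X_tgt = rng.uniform(0.0, 1.0, size=(8, 1))
y_tgt = objective(X_tgt, 0.05)

surrogate = two_phase_surrogate(sources, X_tgt, y_tgt)
print(surrogate(np.array([[0.5]])))  # predicted mean at config 0.5
```

The blended mean would then drive a standard acquisition function; the point of the sketch is only the structure, where source knowledge is first distilled into one transferred surrogate and then combined with the target surrogate using weights tied to observed target data.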