Neural architecture search (NAS) and hyperparameter optimization (HPO) make deep learning accessible to non-experts by automatically finding a suitable deep neural network architecture and tuning the hyperparameters of the training pipeline. While both NAS and HPO have been studied extensively in recent years, NAS methods typically assume fixed hyperparameters and vice versa; there exists little work on joint NAS + HPO. Furthermore, NAS has recently often been framed as a multi-objective optimization problem, in order to take, e.g., resource requirements into account. In this paper, we propose a set of methods that extend current approaches to jointly optimize neural architectures and hyperparameters with respect to multiple objectives. We hope that these methods will serve as simple baselines for future research on multi-objective joint NAS + HPO. To facilitate this, all our code is available at https://github.com/automl/multi-obj-baselines.
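As a concrete illustration of the multi-objective joint NAS + HPO setting (a minimal sketch, not the paper's method), the snippet below runs plain random search over a toy joint search space of architectural choices and training hyperparameters and keeps the Pareto-optimal configurations under two objectives. All names, the search space, and the toy objective values are illustrative assumptions, not part of the released codebase.

```python
# Minimal sketch: random search over a joint NAS + HPO space with two
# objectives (validation error, model size), keeping the Pareto front.
# The search space and the evaluate() proxy are purely illustrative.
import random

SEARCH_SPACE = {
    # architectural choices
    "num_layers": [2, 4, 8],
    "channels": [16, 32, 64],
    # training hyperparameters
    "batch_size": [32, 64, 128],
    # learning rate is sampled log-uniformly in [1e-4, 1e-1] below
}

def sample_config():
    """Draw one joint architecture + hyperparameter configuration."""
    return {
        "num_layers": random.choice(SEARCH_SPACE["num_layers"]),
        "channels": random.choice(SEARCH_SPACE["channels"]),
        "batch_size": random.choice(SEARCH_SPACE["batch_size"]),
        "learning_rate": 10 ** random.uniform(-4, -1),
    }

def evaluate(cfg):
    """Placeholder for training + evaluation; returns (error, model size).
    In practice this would train the network and measure both objectives."""
    size = cfg["num_layers"] * cfg["channels"] ** 2              # proxy for #parameters
    error = 1.0 / (1 + size) + abs(cfg["learning_rate"] - 0.01)  # toy validation error
    return error, size

def pareto_front(points):
    """Keep configurations not dominated in both objectives (minimization)."""
    front = []
    for cfg, objs in points:
        dominated = any(
            all(o_other <= o_this for o_this, o_other in zip(objs, other)) and other != objs
            for _, other in points
        )
        if not dominated:
            front.append((cfg, objs))
    return front

if __name__ == "__main__":
    evaluated = [(cfg, evaluate(cfg)) for cfg in (sample_config() for _ in range(50))]
    for cfg, (err, size) in pareto_front(evaluated):
        print(f"error={err:.4f}  size={size:>6}  {cfg}")
```

Any actual multi-objective NAS + HPO method would replace the random sampler with a proper search strategy and the proxy objectives with real training runs; the point here is only the shape of the problem: one joint configuration space, several objectives, and a Pareto set as the result.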