The task of hyper-parameter optimization (HPO) is burdened with heavy computational costs due to the intractability of optimizing a model's weights and its hyper-parameters simultaneously. In this work, we introduce a new class of HPO methods and explore how the low-rank factorization of the convolutional weights of the intermediate layers of a convolutional neural network can be used to define an analytical response surface for optimizing hyper-parameters, using only training data. We quantify how this surface behaves as a surrogate for model performance and show that it can be solved using a trust-region search algorithm, which we call autoHyper. Our algorithm outperforms state-of-the-art methods such as Bayesian Optimization and generalizes across model, optimizer, and dataset selection. The PyTorch code can be found at \url{https://github.com/MathieuTuli/autoHyper}.
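To make the core idea concrete, the sketch below shows one plausible way to summarize a convolutional layer's weights by a low-rank quantity: unfold the 4D kernel into a matrix, take its singular values, and measure how many are needed to capture most of the spectral mass. This is a minimal illustration only; the function name `low_rank_surrogate`, the `energy` threshold, and the SVD-based effective-rank measure are assumptions for exposition and are not the paper's actual metric or the autoHyper implementation.

```python
import torch

def low_rank_surrogate(weight: torch.Tensor, energy: float = 0.99) -> float:
    """Illustrative (hypothetical) low-rank summary of a conv layer.

    Unfolds a 4D kernel (out, in, kH, kW) into a 2D matrix, computes its
    singular values, and returns the normalized effective rank: the
    fraction of singular directions needed to capture `energy` of the
    spectral mass. A scalar like this, computed from training alone,
    could serve as one coordinate of a response surface over
    hyper-parameters.
    """
    mat = weight.flatten(start_dim=1)       # shape: (out, in * kH * kW)
    s = torch.linalg.svdvals(mat)           # singular values, descending
    cum = torch.cumsum(s, dim=0) / s.sum()  # cumulative spectral mass
    rank = int((cum < energy).sum().item()) + 1
    return rank / min(mat.shape)            # in (0, 1]; lower = lower rank

# Usage: probe an intermediate conv layer of any CNN.
conv = torch.nn.Conv2d(64, 128, kernel_size=3)
print(low_rank_surrogate(conv.weight.detach()))
```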