We propose a new scalable method to optimize the architecture of an artificial neural network. The proposed algorithm, called Greedy Search for Neural Network Architecture, aims to determine a neural network with a minimal number of layers that is at least as performant, in terms of accuracy and computational cost, as networks of the same structure identified by other hyperparameter search algorithms. Numerical experiments on benchmark datasets show that, on these datasets, our method outperforms state-of-the-art hyperparameter optimization algorithms both in the predictive performance attained by the selected neural network architecture and in the time-to-solution of the hyperparameter optimization itself.
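To make the greedy, layer-minimizing idea concrete, here is a minimal sketch of one way such a search could look. It is an illustrative assumption, not the paper's implementation: the use of scikit-learn's MLPClassifier, the fixed layer width, and the stopping tolerance `tol` are all choices made only for this example.

```python
# Illustrative sketch (assumed, not the paper's algorithm): grow the network
# one hidden layer at a time and stop once validation accuracy stops improving,
# so the returned architecture uses as few layers as this criterion allows.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

def greedy_layer_search(width=64, max_layers=8, tol=1e-3):
    """Greedily add hidden layers of a fixed width (both hypothetical defaults)."""
    best_score, best_layers = -1.0, ()
    layers = ()
    for _ in range(max_layers):
        layers = layers + (width,)  # candidate: one more hidden layer
        model = MLPClassifier(hidden_layer_sizes=layers,
                              max_iter=300, random_state=0)
        model.fit(X_tr, y_tr)
        score = model.score(X_val, y_val)
        if score <= best_score + tol:  # no meaningful gain: keep the smaller net
            break
        best_score, best_layers = score, layers
    return best_layers, best_score

layers, acc = greedy_layer_search()
print(f"selected hidden layers: {layers}, validation accuracy: {acc:.3f}")
```

The greedy stopping rule is what keeps the search cheap relative to exhaustive hyperparameter sweeps: each depth is trained at most once, and the search terminates as soon as an added layer fails to pay for itself on the validation set.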