Network Morphism based Neural Architecture Search (NAS) is one of the most efficient methods; however, knowing where and when to add new neurons or remove dysfunctional ones is generally left to black-box reinforcement learning models. In this paper, we present a new Network Morphism based NAS called Noisy Heuristics NAS, which uses heuristics learned from manually developing neural network models and inspired by biological neuronal dynamics. First, we add new neurons randomly and prune away some to select only the best-fitting neurons. Second, we control the number of layers in the network using the relationship of hidden units to the number of input-output connections. Our method can increase or decrease the capacity or non-linearity of models online, governed by a few meta-parameters specified by the user. Our method generalizes both on toy datasets and on real-world datasets such as MNIST, CIFAR-10, and CIFAR-100, with performance comparable to the hand-engineered architecture ResNet-18 at a similar number of parameters.
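The grow-and-prune heuristic summarized above can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the layer sizes, the small-random-weight initialization of new units, and pruning by incoming-weight norm are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def grow(W, n_new):
    """Noisy growth: append n_new hidden units with small random incoming weights."""
    new_rows = 0.01 * rng.standard_normal((n_new, W.shape[1]))
    return np.vstack([W, new_rows])

def prune(W, keep):
    """Selection: keep only the `keep` units with the largest incoming-weight norms
    (one possible proxy for 'best-fitting'; the paper's actual criterion may differ)."""
    norms = np.linalg.norm(W, axis=1)
    idx = np.sort(np.argsort(norms)[-keep:])
    return W[idx]

# Toy hidden layer: 8 units, each with 4 input connections
W = rng.standard_normal((8, 4))
W = grow(W, 4)    # add 4 candidate units -> 12 units
W = prune(W, 8)   # retain the 8 strongest -> back to 8 units
print(W.shape)    # (8, 4)
```

In a training loop, such grow and prune steps would alternate with gradient updates so that only units that acquire useful weights survive the next pruning round.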