The time and effort involved in hand-designing deep neural networks is immense. This has prompted the development of Neural Architecture Search (NAS) techniques to automate this design. However, NAS algorithms tend to be slow and expensive; they need to train vast numbers of candidate networks to inform the search process. This could be alleviated if we could partially predict a network's trained accuracy from its initial state. In this work, we examine the overlap of activations between datapoints in untrained networks and motivate how this can give a measure which is usefully indicative of a network's trained performance. We incorporate this measure into a simple algorithm that allows us to search for powerful networks without any training in a matter of seconds on a single GPU, and verify its effectiveness on NAS-Bench-101, NAS-Bench-201, NATS-Bench, and Network Design Spaces. Our approach can be readily combined with more expensive search methods; we examine a simple adaptation of regularised evolutionary search. Code for reproducing our experiments is available at https://github.com/BayesWatch/nas-without-training.
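To make the idea concrete, below is a minimal sketch of how an activation-overlap score of this kind can be computed for an untrained network, assuming the binary ReLU-code / Hamming-kernel formulation; the function name `activation_overlap_score` and the exact details are illustrative and may differ from the authors' released implementation.

```python
import torch
import torch.nn as nn

def activation_overlap_score(model: nn.Module, inputs: torch.Tensor) -> float:
    """Score an untrained network from the overlap of its ReLU activation
    patterns over a mini-batch (hypothetical helper; one reading of the
    measure described in the abstract).

    Each datapoint is reduced to a binary code marking which ReLU units it
    activates; codes are compared by Hamming distance, and the log-determinant
    of the resulting kernel rewards networks that keep datapoints distinct.
    """
    codes = []

    def hook(_module, _inp, out):
        # Record which units fire (1) or stay inactive (0) for each datapoint.
        codes.append((out > 0).flatten(1).float())

    handles = [m.register_forward_hook(hook)
               for m in model.modules() if isinstance(m, nn.ReLU)]
    with torch.no_grad():
        model(inputs)
    for h in handles:
        h.remove()

    c = torch.cat(codes, dim=1)              # (batch, total ReLU units)
    n_units = c.shape[1]
    hamming = c @ (1 - c).T + (1 - c) @ c.T  # pairwise Hamming distances
    kernel = n_units - hamming               # agreement between binary codes
    return torch.slogdet(kernel).logabsdet.item()
```

A training-free search then reduces to drawing candidate architectures, scoring each at random initialisation with a single mini-batch, and keeping the highest-scoring network, which is why the whole procedure can run in seconds on one GPU.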