Neural architectures can naturally be viewed as computational graphs. Motivated by this perspective, in this paper we study neural architecture search (NAS) through the lens of learning random graph models. In contrast to existing NAS methods, which largely focus on searching for a single best architecture, i.e., point estimation, we propose GraphPNAS, a deep graph generative model that learns a distribution of well-performing architectures. Relying on graph neural networks (GNNs), our GraphPNAS can better capture the topologies of good neural architectures and the relations between operators therein. Moreover, our graph generator yields a learnable probabilistic search method that is more flexible and efficient than the commonly used RNN generator and random search methods. Finally, we learn our generator via an efficient reinforcement learning formulation for NAS. To assess the effectiveness of GraphPNAS, we conduct extensive experiments on three search spaces: the challenging RandWire space on TinyImageNet, the ENAS space on CIFAR10, and NAS-Bench-101/201. The RandWire search space is significantly larger than other search spaces in the literature. We show that our proposed graph generator consistently outperforms RNN-based generators and achieves better than or comparable performance to state-of-the-art NAS methods.
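As a minimal illustration of the reinforcement learning formulation referenced above, the sketch below trains a probabilistic generator over DAG edges with REINFORCE so that probability mass concentrates on high-reward architectures. Note the simplifications: a factorized Bernoulli edge distribution stands in for the paper's GNN-based generator, and `proxy_reward` is a hypothetical placeholder for the validation accuracy of a trained child network.

```python
import torch

# Minimal REINFORCE sketch, assuming a factorized Bernoulli edge
# distribution in place of the GNN-based generator, and a hypothetical
# proxy reward in place of trained-child validation accuracy.

num_nodes = 8
num_edges = num_nodes * (num_nodes - 1) // 2  # upper-triangular DAG edge slots

# One learnable logit per possible edge of the DAG.
edge_logits = torch.zeros(num_edges, requires_grad=True)
opt = torch.optim.Adam([edge_logits], lr=0.05)


def proxy_reward(edges: torch.Tensor) -> float:
    """Hypothetical stand-in for the validation accuracy of the trained
    child network: rewards architectures near 40% edge density."""
    return 1.0 - abs(edges.mean().item() - 0.4)


baseline = 0.0  # moving-average baseline to reduce gradient variance
for step in range(300):
    dist = torch.distributions.Bernoulli(logits=edge_logits)
    edges = dist.sample()            # sample one candidate architecture
    reward = proxy_reward(edges)
    baseline = 0.9 * baseline + 0.1 * reward
    # REINFORCE: raise the log-probability of above-baseline samples.
    loss = -(reward - baseline) * dist.log_prob(edges).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("learned edge probabilities:", torch.sigmoid(edge_logits).detach())
```

Because the generator defines a distribution rather than a single point estimate, sampling it repeatedly after training yields an ensemble of well-performing candidate graphs rather than one architecture.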