The discovery of neural architectures from scratch is the long-standing goal of Neural Architecture Search (NAS). Searching over a wide spectrum of neural architectures can facilitate the discovery of previously unconsidered but well-performing architectures. In this work, we take a large step towards discovering neural architectures from scratch by expressing architectures algebraically. This algebraic view leads to a more general method for designing search spaces, which allows us to compactly represent search spaces that are hundreds of orders of magnitude larger than common spaces from the literature. Further, we propose a Bayesian Optimization strategy to efficiently search over such huge spaces, and demonstrate empirically that both our search space design and our search strategy can be superior to existing baselines. We open-source our algebraic NAS approach and provide APIs for PyTorch and TensorFlow.
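To make the algebraic view concrete, below is a minimal, purely illustrative sketch of how architectures can be expressed as algebraic terms generated from a small set of production rules, and how such a compact rule set already spans a combinatorially large space. The toy grammar, the operator names (`Sequential`, `Residual`, `conv3x3`, etc.), and the helper functions are assumptions made for illustration only, not the paper's actual API.

```python
import math
import random

# Toy production rules (an assumption for illustration, not the paper's API).
# Each nonterminal maps to a list of productions; a production pairs a
# topological operator with the nonterminals of its arguments, so every
# complete derivation is an algebraic term describing an architecture.
GRAMMAR = {
    "ARCH": [
        ("Sequential", ["BLOCK", "BLOCK"]),
        ("Residual", ["BLOCK"]),
    ],
    "BLOCK": [
        ("Sequential", ["PRIM", "PRIM"]),
        ("Residual", ["PRIM"]),
    ],
    "PRIM": [  # primitive computations are nullary operators (leaves)
        ("conv3x3", []),
        ("conv1x1", []),
        ("max_pool", []),
        ("identity", []),
    ],
}

def sample_term(symbol="ARCH", rng=random):
    """Sample an architecture term by recursively expanding productions."""
    op, args = rng.choice(GRAMMAR[symbol])
    return (op, [sample_term(arg, rng) for arg in args])

def to_string(term):
    """Render a term, e.g. Sequential(Residual(conv3x3), conv1x1)."""
    op, args = term
    return op if not args else f"{op}({', '.join(map(to_string, args))})"

def count_terms(symbol="ARCH"):
    """Count derivable terms: even this handful of rules spans hundreds of
    architectures, and deeper grammars grow the space super-exponentially."""
    return sum(
        math.prod(count_terms(arg) for arg in args) if args else 1
        for _, args in GRAMMAR[symbol]
    )

if __name__ == "__main__":
    random.seed(0)
    print(to_string(sample_term()))  # one randomly sampled architecture term
    print(count_terms())             # size of this toy space: 420
```

Because a term is just an operator applied to sub-terms, richer spaces follow from adding rules rather than enumerating architectures, which is what makes the compact representation of very large search spaces possible.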