In this paper, we propose an approach to neural architecture search (NAS) based on graph embeddings. NAS has previously been addressed with discrete, sampling-based methods, which are computationally expensive, as well as with differentiable approaches, which come at lower cost but enforce stronger constraints on the search space. The proposed approach leverages the advantages of both: it builds a smooth variational neural architecture embedding space in which a structural subset of architectures is evaluated at training time via predicted performance, while allowing extrapolation from this subspace at inference time. We evaluate the proposed approach on two common search spaces, the graph structure defined by the ENAS approach and the NAS-Bench-101 search space, and improve over the state of the art in both.
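The core mechanism described above — encoding an architecture graph into a smooth latent space and scoring it with a learned performance predictor — can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: the linear encoder, latent dimension, and random placeholder weights are all assumptions for demonstration; a real system would train these end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(adj, ops, w_mu, w_logvar):
    """Map a flattened architecture graph (adjacency matrix plus
    per-node operation one-hots) to the mean and log-variance of a
    latent embedding. A toy linear encoder stands in for a trained
    graph encoder."""
    x = np.concatenate([adj.ravel(), ops.ravel()])
    return w_mu @ x, w_logvar @ x

def reparameterize(mu, logvar, rng):
    # Sample z = mu + sigma * eps; the reparameterization keeps the
    # embedding space smooth and amenable to gradient-based search.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def predict_performance(z, w_pred):
    # Surrogate predictor: latent embedding -> estimated accuracy in (0, 1).
    return 1.0 / (1.0 + np.exp(-(w_pred @ z)))

# Toy 5-node cell: a random upper-triangular DAG adjacency matrix
# and a one-hot operation choice per node (3 hypothetical op types).
n_nodes, n_ops, latent_dim = 5, 3, 8
adj = np.triu(rng.integers(0, 2, (n_nodes, n_nodes)), 1).astype(float)
ops = np.eye(n_ops)[rng.integers(0, n_ops, n_nodes)]

# Random placeholder weights; in practice these are learned jointly
# with the predictor on observed (architecture, accuracy) pairs.
in_dim = n_nodes * n_nodes + n_nodes * n_ops
w_mu = rng.standard_normal((latent_dim, in_dim)) * 0.1
w_logvar = rng.standard_normal((latent_dim, in_dim)) * 0.1
w_pred = rng.standard_normal(latent_dim) * 0.1

mu, logvar = encode(adj, ops, w_mu, w_logvar)
z = reparameterize(mu, logvar, rng)
score = predict_performance(z, w_pred)
print(z.shape, float(score))
```

At inference time, the same predictor can score latent points outside the structural subset seen during training, which is what enables extrapolation in the embedding space.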