In this paper, we propose an approach to neural architecture search (NAS) based on graph embeddings. NAS has previously been addressed with discrete, sampling-based methods, which are computationally expensive, as well as with differentiable approaches, which come at lower cost but enforce stronger constraints on the search space. The proposed approach leverages the advantages of both: it builds a smooth variational neural architecture embedding space in which we evaluate a structural subset of architectures at training time using their predicted performance, while allowing extrapolation from this subspace at inference time. We evaluate the proposed approach on two common search spaces, the graph structure defined by the ENAS approach and the NAS-Bench-101 search space, and improve over the state of the art on both. Our implementation is available at \url{https://github.com/automl/SVGe}.