While early research in neural architecture search (NAS) required extreme computational resources, the recent releases of tabular and surrogate benchmarks have greatly increased the speed and reproducibility of NAS research. However, two of the most popular benchmarks do not provide the full training information for each architecture. As a result, on these benchmarks it is not possible to run many types of multi-fidelity techniques, such as learning curve extrapolation, that require evaluating architectures at arbitrary epochs. In this work, we present a method using singular value decomposition and noise modeling to create surrogate benchmarks, NAS-Bench-111, NAS-Bench-311, and NAS-Bench-NLP11, that output the full training information for each architecture, rather than just the final validation accuracy. We demonstrate the power of using the full training information by introducing a learning curve extrapolation framework to modify single-fidelity algorithms, showing that it leads to improvements over popular single-fidelity algorithms that claimed state-of-the-art performance upon release. Our code and pretrained models are available at https://github.com/automl/nas-bench-x11.
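To make the core idea concrete, the following is a minimal sketch of SVD-based learning-curve compression with a simple noise model, not the paper's actual implementation: the toy data, the architecture encodings, the surrogate choice (a random forest), and the residual-based noise model are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy data (hypothetical): rows are architectures, columns are per-epoch
# validation accuracies; X holds made-up architecture encodings.
rng = np.random.default_rng(0)
n_archs, n_epochs, k = 200, 100, 5
X = rng.random((n_archs, 16))
curves = np.cumsum(rng.random((n_archs, n_epochs)) * 0.01, axis=1)

# 1) Compress curves with SVD: each full curve is summarized by k
#    coefficients over a shared low-rank basis of singular vectors.
mean_curve = curves.mean(axis=0)
U, S, Vt = np.linalg.svd(curves - mean_curve, full_matrices=False)
coeffs = U[:, :k] * S[:k]   # per-architecture coefficients, shape (n_archs, k)
basis = Vt[:k]              # shared top-k basis, shape (k, n_epochs)

# 2) Fit a surrogate mapping architecture encodings to SVD coefficients
#    (multi-output regression; one model predicts all k coefficients).
surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, coeffs)

# 3) Predict a full learning curve for an unseen architecture, then add
#    noise drawn from the per-epoch reconstruction residuals.
x_new = rng.random((1, 16))
pred_curve = surrogate.predict(x_new) @ basis + mean_curve
residual_std = (curves - (coeffs @ basis + mean_curve)).std(axis=0)
noisy_curve = pred_curve + rng.normal(0.0, residual_std)
```

In the actual benchmarks, the surrogate would be trained on real learning curves from the underlying search spaces rather than random data; the sketch only shows the shape of the pipeline, in which an entire curve (not just the final accuracy) is returned for any queried architecture.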