Common Neural Architecture Search methods generate a large number of candidate architectures that need to be trained in order to assess their performance and find an optimal architecture. To minimize the search time, we use different performance estimation strategies. The effectiveness of such strategies varies in terms of accuracy as well as fit and query time. This study proposes a new method, EmProx Score (Embedding Proximity Score). Similar to Neural Architecture Optimization (NAO), this method maps candidate architectures to a continuous embedding space using an encoder-decoder framework. The performance of candidates is then estimated using weighted kNN based on the embedding vectors of architectures whose performance is known. Performance estimates of this method are comparable in accuracy to the MLP performance predictor used in NAO, while being nearly nine times faster to train. Benchmarking against other performance estimation strategies currently in use shows similar or better accuracy, while being five to eighty times faster.
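To make the estimation step concrete, the following is a minimal sketch of weighted kNN over architecture embeddings. It is not the authors' implementation; all names (emprox_estimate, known_embeddings, known_scores, k) and the inverse-distance weighting are illustrative assumptions.

```python
import numpy as np

def emprox_estimate(query_embedding, known_embeddings, known_scores, k=5, eps=1e-8):
    """Estimate a candidate's performance from its k nearest known architectures,
    weighting each neighbour's score by the inverse of its embedding distance."""
    dists = np.linalg.norm(known_embeddings - query_embedding, axis=1)
    nearest = np.argsort(dists)[:k]              # indices of the k closest embeddings
    weights = 1.0 / (dists[nearest] + eps)       # closer neighbours receive larger weights
    return float(np.dot(weights, known_scores[nearest]) / weights.sum())

# Usage sketch: embeddings would come from the encoder of the encoder-decoder
# framework; known_scores are the measured accuracies of already-trained candidates.
rng = np.random.default_rng(0)
known_embeddings = rng.normal(size=(100, 32))     # 100 evaluated architectures, 32-d embeddings (hypothetical)
known_scores = rng.uniform(0.85, 0.95, size=100)  # their validation accuracies (hypothetical)
candidate = rng.normal(size=32)
print(emprox_estimate(candidate, known_embeddings, known_scores, k=10))
```

Because the estimate only requires a nearest-neighbour lookup over embeddings of already-evaluated architectures, there is no separate predictor to train, which is consistent with the reported speed-up over the MLP predictor in NAO.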