Predictor-based Neural Architecture Search (NAS) employs an architecture performance predictor to improve sample efficiency. However, predictor-based NAS suffers from a severe ``cold-start'' problem, since a large amount of architecture-performance data is required to obtain a working predictor. In this paper, we focus on exploiting cheaper-to-obtain performance estimations (i.e., low-fidelity information) to mitigate the large data requirement of predictor training. Despite the intuitiveness of this idea, we observe that using inappropriate low-fidelity information can even damage the prediction ability, and that different search spaces prefer different types of low-fidelity information. To address this issue and better fuse the beneficial knowledge provided by different types of low-fidelity information, we propose a novel dynamic ensemble predictor framework that comprises two steps. In the first step, we train separate sub-predictors on different types of available low-fidelity information to extract beneficial knowledge as low-fidelity experts. In the second step, we learn a gating network that dynamically outputs a set of weighting coefficients conditioned on each input neural architecture; these coefficients combine the predictions of the low-fidelity experts in a weighted sum. The overall predictor is optimized on a small set of actual architecture-performance data to fuse the knowledge from the different low-fidelity experts into the final prediction. We conduct extensive experiments across five search spaces with different architecture encoders under various experimental settings. Our method can easily be incorporated into existing predictor-based NAS frameworks to discover better architectures.
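The second step amounts to an architecture-conditioned mixture of experts. Below is a minimal PyTorch-style sketch of this weighted fusion; all names (`DynamicEnsemblePredictor`, `embed_dim`, the gating MLP width) are hypothetical illustrations rather than the paper's actual implementation, and each expert is assumed to map an architecture embedding to a scalar score.

```python
import torch
import torch.nn as nn

class DynamicEnsemblePredictor(nn.Module):
    """Illustrative sketch: fuse low-fidelity experts with an
    architecture-conditioned gating network (hypothetical names)."""

    def __init__(self, embed_dim: int, experts: list):
        super().__init__()
        # Sub-predictors pre-trained on different low-fidelity information
        # types (step 1); each is assumed to take an architecture embedding
        # of shape (B, embed_dim) and return a score of shape (B,).
        self.experts = nn.ModuleList(experts)
        # Gating network (step 2): one weighting coefficient per expert,
        # conditioned on the input architecture's embedding.
        self.gate = nn.Sequential(
            nn.Linear(embed_dim, 64),
            nn.ReLU(),
            nn.Linear(64, len(experts)),
        )

    def forward(self, arch_embedding: torch.Tensor) -> torch.Tensor:
        # Softmax-normalized, architecture-dependent weights, shape (B, K).
        weights = torch.softmax(self.gate(arch_embedding), dim=-1)
        # Expert predictions stacked along the last dimension, shape (B, K).
        preds = torch.stack([e(arch_embedding) for e in self.experts], dim=-1)
        # Weighted sum fuses the experts into the final performance score.
        return (weights * preds).sum(dim=-1)
```

The module would then be optimized end-to-end on the small set of ground-truth architecture-performance pairs (e.g., with a regression or ranking loss); whether the experts stay frozen during this stage is left unspecified in this sketch.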