Most existing neural architecture search (NAS) algorithms are dedicated to, and evaluated by, downstream tasks such as image classification in computer vision. However, extensive experiments have shown that prominent neural architectures, such as ResNet in computer vision and LSTM in natural language processing, are generally good at extracting patterns from the input data and perform well across different downstream tasks. In this paper, we attempt to answer two fundamental questions related to NAS. (1) Is it necessary to use the performance of specific downstream tasks to evaluate and search for good neural architectures? (2) Can we perform NAS effectively and efficiently while being agnostic to the downstream tasks? To answer these questions, we propose a novel and generic NAS framework, termed Generic NAS (GenNAS). GenNAS does not use task-specific labels but instead adopts regression on a set of manually designed synthetic signal bases for architecture evaluation. Such a self-supervised regression task can effectively evaluate the intrinsic power of an architecture to capture and transform the input signal patterns, and allows training samples to be used more fully. Extensive experiments across 13 CNN search spaces and one NLP search space demonstrate the remarkable efficiency of GenNAS using regression, in terms of both evaluating the neural architectures (quantified by the ranking correlation, Spearman's rho, between the approximated performances and the downstream task performances) and the convergence speed of training (within a few seconds).
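To make the evaluation idea concrete, the following is a minimal sketch of a GenNAS-style ranking loop, not the paper's actual implementation: each candidate architecture is briefly trained to regress a fixed synthetic target (a stand-in for the manually designed signal bases), its final regression loss serves as a proxy score, and the proxy ranking is compared against ground-truth downstream accuracies via Spearman's rho. The names `candidate_models`, `true_accuracies`, and `make_synthetic_targets` are hypothetical placeholders assumed for illustration.

```python
# Illustrative sketch of regression-based architecture ranking (PyTorch + SciPy).
import torch
import torch.nn as nn
from scipy.stats import spearmanr


def make_synthetic_targets(inputs, out_shape, seed=0):
    """Hypothetical synthetic signal basis: a fixed random target tensor
    that every candidate architecture must regress from the same inputs."""
    g = torch.Generator().manual_seed(seed)
    return torch.randn(inputs.size(0), *out_shape, generator=g)


def proxy_score(model, inputs, targets, steps=100, lr=1e-3):
    """Briefly train the candidate to regress the synthetic targets and
    return the negative final MSE as its proxy score (higher is better)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    loss = None
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        opt.step()
    return -loss.item()


def ranking_correlation(candidate_models, inputs, targets, true_accuracies):
    """Spearman's rho between the regression-based proxy ranking and the
    ground-truth downstream accuracies of the same candidates."""
    scores = [proxy_score(m, inputs, targets) for m in candidate_models]
    rho, _ = spearmanr(scores, true_accuracies)
    return rho
```

In the actual framework the regression targets are multi-level feature-map signals rather than a single random tensor, but the ranking logic, short regression training followed by a rank-correlation check, follows this pattern.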