Most existing neural architecture search (NAS) algorithms are dedicated to specific downstream tasks, e.g., image classification in computer vision. However, extensive experiments have shown that prominent neural architectures, such as ResNet in computer vision and LSTM in natural language processing, are generally good at extracting patterns from the input data and perform well across different downstream tasks. These observations inspire us to ask: Is it necessary to use the performance of specific downstream tasks to evaluate and search for good neural architectures? Can we perform NAS effectively and efficiently while remaining agnostic to the downstream task? In this work, we attempt to answer both questions affirmatively and improve upon state-of-the-art NAS solutions by proposing a novel and generic NAS framework, termed Generic NAS (GenNAS). GenNAS uses no task-specific labels; instead, it adopts \textit{regression} on a set of manually designed synthetic signal bases for architecture evaluation. Such a self-supervised regression task can effectively evaluate the intrinsic power of an architecture to capture and transform the input signal patterns, and allows more efficient use of the training samples. We then propose an automatic task search that optimizes the combination of synthetic signals using limited downstream-task-specific labels, further improving the performance of GenNAS. Finally, we thoroughly evaluate GenNAS's generality and end-to-end NAS performance across all evaluated search spaces, where it outperforms almost all existing works with significant speedups.
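To make the core idea concrete, the following is a minimal sketch, in PyTorch, of a GenNAS-style regression proxy; it is not the authors' implementation. The function names (\texttt{make\_sine\_basis}, \texttt{regression\_proxy\_score}), the single sinusoidal basis, and all hyperparameters are illustrative assumptions: the paper combines several manually designed signal bases and additionally searches over their combination, which this sketch omits.

\begin{verbatim}
import torch
import torch.nn as nn

def make_sine_basis(batch, channels, size, freq=1.0):
    """Synthetic signal basis: a 2-D sinusoid broadcast over the batch.

    Illustrative assumption -- one low-frequency sine basis stands in
    for the paper's set of manually designed signal bases.
    """
    xs = torch.linspace(0, 2 * torch.pi * freq, size)
    grid = torch.sin(xs)[None, :] + torch.sin(xs)[:, None]  # (size, size)
    return grid.expand(batch, channels, size, size).contiguous()

def regression_proxy_score(net, images, target, steps=100, lr=1e-3):
    """Briefly train `net` to regress `target` from unlabeled `images`.

    `net` must map images to a tensor with the same shape as `target`
    (e.g., a backbone plus a lightweight regression head). The final
    regression loss serves as the architecture's score; lower is better.
    """
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(net(images), target)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return loss_fn(net(images), target).item()
\end{verbatim}

In an end-to-end search under these assumptions, each candidate architecture would be scored with \texttt{regression\_proxy\_score} on a small batch of unlabeled inputs and the candidates ranked by ascending loss, so no downstream-task labels are required for architecture evaluation.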