The recent progress in neural architecture search (NAS) has allowed scaling the automated design of neural architectures to real-world domains, such as object detection and semantic segmentation. However, one prerequisite for the application of NAS is large amounts of labeled data and compute resources. This renders its application challenging in few-shot learning scenarios, where many related tasks need to be learned, each with limited amounts of data and compute time. Thus, few-shot learning is typically done with a fixed neural architecture. To improve upon this, we propose MetaNAS, the first method which fully integrates NAS with gradient-based meta-learning. MetaNAS optimizes a meta-architecture along with the meta-weights during meta-training. During meta-testing, architectures can be adapted to a novel task with a few steps of the task optimizer; that is, task adaptation becomes computationally cheap and requires only little data per task. Moreover, MetaNAS is agnostic in that it can be used with arbitrary model-agnostic meta-learning algorithms and arbitrary gradient-based NAS methods. Empirical results on standard few-shot classification benchmarks show that MetaNAS with a combination of DARTS and REPTILE yields state-of-the-art results.
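To make the joint meta-learning of weights and architectures concrete, the following is a minimal sketch, not the paper's implementation: a REPTILE-style outer update applied to both network weights and DARTS-style architecture mixing parameters. The helper \texttt{forward\_fn}, the hyperparameters, and the optimizer choices are illustrative assumptions.

\begin{verbatim}
import torch

def metanas_reptile_step(meta_weights, meta_alphas, task_loader, forward_fn,
                         task_lr=0.01, adapt_steps=5, meta_lr=0.1):
    """One hypothetical REPTILE-style meta-update over both network weights
    and DARTS-style architecture parameters (a sketch, not the paper's code)."""
    # Clone meta-parameters so task adaptation does not overwrite them.
    w = [p.clone().detach().requires_grad_(True) for p in meta_weights]
    a = [p.clone().detach().requires_grad_(True) for p in meta_alphas]
    task_opt = torch.optim.SGD(w + a, lr=task_lr)

    # Inner loop: a few task-optimizer steps adapt weights AND architecture.
    for _, (x, y) in zip(range(adapt_steps), task_loader):
        loss = torch.nn.functional.cross_entropy(forward_fn(x, w, a), y)
        task_opt.zero_grad()
        loss.backward()
        task_opt.step()

    # Outer (REPTILE) update: move meta-parameters toward the adapted ones.
    with torch.no_grad():
        for m, t in zip(meta_weights, w):
            m += meta_lr * (t - m)
        for m, t in zip(meta_alphas, a):
            m += meta_lr * (t - m)
\end{verbatim}

Here \texttt{forward\_fn(x, w, a)} stands for a hypothetical one-shot model that mixes candidate operations according to the architecture parameters \texttt{a}, as in DARTS; after meta-training, a novel task only requires the few inner-loop steps shown above.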