In this paper, we present a general and effective framework for Neural Architecture Search (NAS), named PredNAS. The motivation is that, given a differentiable performance estimation function, we can directly optimize an architecture toward higher performance by simple gradient ascent. Specifically, we adopt a neural predictor as the performance estimator. Surprisingly, PredNAS achieves state-of-the-art performance on NAS benchmarks with only a few training samples (fewer than 100). To validate the generality of our method, we also apply it to large-scale tasks, comparing against RegNet on ImageNet and YOLOX on MSCOCO. The results demonstrate that PredNAS can discover novel architectures with competitive performance under specific computational complexity constraints.
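The core idea, gradient ascent on a learned performance predictor with respect to the architecture encoding, can be sketched in a few lines. The sketch below is a minimal illustration under assumed choices, not the paper's implementation: the continuous architecture encoding, the MLP predictor (`ArchPredictor`), the step size, and the box constraint are all hypothetical stand-ins for whatever PredNAS actually uses.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a small MLP maps a continuous architecture encoding
# to a predicted performance score; gradient ascent is then run on the
# encoding (not the network weights) to raise the predicted score.
class ArchPredictor(nn.Module):
    def __init__(self, arch_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(arch_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # predicted performance score

def ascend(predictor: ArchPredictor, arch: torch.Tensor,
           steps: int = 50, lr: float = 0.1) -> torch.Tensor:
    """Gradient ascent on the architecture encoding; the predictor is frozen."""
    arch = arch.clone().requires_grad_(True)
    for _ in range(steps):
        score = predictor(arch)
        grad, = torch.autograd.grad(score, arch)
        with torch.no_grad():
            arch += lr * grad      # step uphill on the predicted score
            arch.clamp_(0.0, 1.0)  # keep the encoding in a valid range (assumed)
    return arch.detach()

# Usage: refine a random encoding; in practice the continuous result would be
# projected back to a discrete architecture and checked against FLOPs/params
# constraints, as the paper's search does under complexity budgets.
predictor = ArchPredictor(arch_dim=16)  # assume already trained on (arch, acc) pairs
candidate = ascend(predictor, torch.rand(16))
```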