Recent breakthroughs in Neural Architecture Search (NAS) extend the field's research scope towards a broader range of vision tasks and more diversified search spaces. While existing NAS methods mostly design architectures for a single task, algorithms that look beyond single-task search are emerging to pursue a more efficient and universal solution across various tasks. Many of them leverage transfer learning and seek to preserve, reuse, and refine network design knowledge to achieve higher efficiency in future tasks. However, the enormous computational cost and experimental complexity of cross-task NAS impose barriers to valuable research in this direction. Existing NAS benchmarks all focus on one type of vision task, i.e., classification. In this work, we propose TransNAS-Bench-101, a benchmark dataset containing network performance across seven vision tasks, covering classification, regression, pixel-level prediction, and self-supervised tasks. This diversity provides opportunities to transfer NAS methods among tasks and allows more complex transfer schemes to evolve. We explore two fundamentally different types of search space: a cell-level search space and a macro-level search space. With 7,352 backbones evaluated on all seven tasks, we provide 51,464 trained models with detailed training information. With TransNAS-Bench-101, we hope to encourage the advent of exceptional NAS algorithms that raise cross-task search efficiency and generalizability to the next level. Our dataset files will be available at Mindspore and VEGA.
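To make the dataset's scale concrete, the following is a minimal, self-contained Python sketch of the benchmark's bookkeeping: one trained model per (backbone, task) pair. This is an illustration under stated assumptions, not the released query API; the Taskonomy-style task identifiers and record fields below are our assumptions.

    # Sketch of the benchmark layout (illustrative, not the released API).
    # Task names are assumed Taskonomy-style identifiers for the seven tasks.
    tasks = [
        "class_object", "class_scene",   # classification
        "room_layout",                   # regression
        "segmentsemantic", "normal",     # pixel-level prediction
        "autoencoder",                   # pixel-level prediction
        "jigsaw",                        # self-supervised
    ]

    NUM_BACKBONES = 7_352  # backbones drawn from the cell- and macro-level spaces

    # One training record per (backbone, task) pair; field names are assumptions.
    records = {
        (arch_id, task): {"train_loss": None, "valid_metric": None, "train_time": None}
        for arch_id in range(NUM_BACKBONES)
        for task in tasks
    }
    assert len(records) == 51_464  # 7,352 backbones x 7 tasks

The assertion checks the arithmetic stated in the abstract: evaluating every backbone on every task yields 7,352 x 7 = 51,464 trained models.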