Network Architecture Search (NAS) methods have recently attracted considerable attention. They design networks with better performance and require far less search time than traditional manual tuning. Despite their efficiency in model deployment, most NAS algorithms target a single task on a fixed hardware system. However, real-life few-shot learning environments often cover a large number of tasks ($T$) and deployments on a wide variety of hardware platforms ($H$). The combinatorial search complexity $T \times H$ creates a fundamental search-efficiency challenge if existing NAS methods are applied naively to these scenarios. To overcome this issue, we show, for the first time, how to rapidly adapt model architectures to new tasks in a many-task many-hardware few-shot learning setup by integrating Model-Agnostic Meta-Learning (MAML) into the NAS flow. The proposed method, H-Meta-NAS, is hardware-aware and performs optimisation within the MAML framework. H-Meta-NAS shows Pareto dominance over a variety of NAS and manual baselines on popular few-shot learning benchmarks across various hardware platforms and constraints. In particular, on the 5-way 1-shot Mini-ImageNet classification task, it outperforms the best manual baseline by a large margin (5.21% in accuracy) while using 60% less computation.
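As a rough sketch of the meta-learning component referenced above, the following PyTorch snippet implements a standard MAML inner/outer loop on a toy sinusoid regression problem. The network, task distribution, and hyperparameters are illustrative assumptions, not the paper's code; H-Meta-NAS would additionally search over candidate architectures under per-hardware latency/compute constraints on top of such a loop.

```python
import torch
import torch.nn.functional as F

def net(params, x):
    # Tiny two-layer MLP applied functionally, so adapted weights can be swapped in.
    w1, b1, w2, b2 = params
    return F.linear(torch.relu(F.linear(x, w1, b1)), w2, b2)

def sample_task():
    # Toy sinusoid regression task, standing in for one few-shot task.
    amp, phase = torch.rand(1) * 4 + 1, torch.rand(1) * 3.14
    def draw(n=10):
        x = torch.rand(n, 1) * 10 - 5
        return x, amp * torch.sin(x + phase)
    return draw

params = [torch.randn(40, 1) * 0.1, torch.zeros(40),
          torch.randn(1, 40) * 0.1, torch.zeros(1)]
for p in params:
    p.requires_grad_(True)
meta_opt = torch.optim.Adam(params, lr=1e-3)
inner_lr = 0.01

for step in range(1000):
    meta_loss = 0.0
    for _ in range(4):                       # meta-batch over tasks (the T axis)
        draw = sample_task()
        x_s, y_s = draw()                    # support set
        x_q, y_q = draw()                    # query set
        # Inner loop: one adaptation step on the support set; create_graph=True
        # lets the outer update differentiate through the adaptation.
        grads = torch.autograd.grad(F.mse_loss(net(params, x_s), y_s),
                                    params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(params, grads)]
        # Outer loop: evaluate the adapted weights on the query set.
        meta_loss = meta_loss + F.mse_loss(net(adapted, x_q), y_q)
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
```

Differentiating through the inner update is what lets the meta-learned initialisation adapt to a new task in a few gradient steps; a hardware-aware NAS would evaluate such adapted models across the $H$ axis as well.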