Fine-tuning from pre-trained ImageNet models has been a simple, effective, and popular approach for various computer vision tasks. The common practice of fine-tuning is to adopt a default hyperparameter setting with a fixed pre-trained model, neither of which is optimized for the specific task or time constraint. Moreover, in cloud computing or GPU clusters where tasks arrive sequentially in a stream, faster online fine-tuning is a more desirable and realistic strategy for saving money, energy consumption, and CO2 emissions. In this paper, we propose a joint Neural Architecture Search and Online Adaption framework named NASOA for faster task-oriented fine-tuning upon user request. Specifically, NASOA first performs an offline NAS to identify a group of training-efficient networks that form a pre-trained model zoo. We propose a novel joint block- and macro-level search space to enable a flexible and efficient search. Then, by estimating fine-tuning performance with an adaptive model that accumulates experience from past tasks, an online schedule generator picks the most suitable model and generates a personalized training regime for each desired task in a one-shot fashion. The resulting model zoo is more training-efficient than SOTA models, e.g., 6x faster than RegNetY-16GF and 1.7x faster than EfficientNet-B3. Experiments on multiple datasets also show that NASOA achieves much better fine-tuning results, i.e., around 2.1% higher accuracy than the best result in the RegNet series under various constraints and tasks, and 40x faster search than BOHB.