Frozen pretrained models have become a viable alternative to the pretraining-then-finetuning paradigm for transfer learning. However, with frozen models there are relatively few parameters available for adapting to downstream tasks, which is problematic in computer vision, where tasks vary significantly in input/output format and in the type of information that is of value. In this paper, we present a study of frozen pretrained models applied to diverse and representative computer vision tasks, including object detection, semantic segmentation, and video action recognition. From this empirical analysis, our work answers the questions of which pretraining task fits best with this frozen setting, how to make the frozen setting more flexible for various downstream tasks, and what effect larger model sizes have. We additionally examine the upper bound of performance using a giant frozen pretrained model with 3 billion parameters (SwinV2-G) and find that it reaches competitive performance on a varied set of major benchmarks with only one shared frozen base network: 60.0 box mAP and 52.2 mask mAP on COCO object detection test-dev, 57.6 val mIoU on ADE20K semantic segmentation, and 81.7 top-1 accuracy on Kinetics-400 action recognition. With this work, we hope to bring greater attention to this promising path of freezing pretrained image models.
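To make the frozen setting concrete, the following is a minimal illustrative sketch, not the paper's implementation: it assumes PyTorch and torchvision, uses a ResNet-50 as a stand-in for a pretrained image backbone such as SwinV2-G, and attaches a hypothetical linear head (here sized for 400 classes, e.g. Kinetics-400) while keeping all backbone parameters frozen.

```python
# Minimal sketch of the frozen-backbone setting (assumptions: PyTorch +
# torchvision; ResNet-50 stands in for a large pretrained image model).
import torch
import torch.nn as nn
from torchvision.models import resnet50

backbone = resnet50()           # in practice, load pretrained weights here
backbone.fc = nn.Identity()     # expose the 2048-d pooled features
for p in backbone.parameters():
    p.requires_grad = False     # freeze the shared base network
backbone.eval()

# Hypothetical task-specific head: a linear classifier; detection or
# segmentation would attach richer heads on top of the same frozen features.
head = nn.Linear(2048, 400)
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)  # head params only

images = torch.randn(2, 3, 224, 224)
labels = torch.randint(0, 400, (2,))

with torch.no_grad():
    feats = backbone(images)    # frozen features, no gradients in backbone
loss = nn.functional.cross_entropy(head(feats), labels)
loss.backward()
optimizer.step()
```

In this setup only the head's parameters receive gradient updates, which mirrors the paper's premise that relatively few parameters are available for adapting a frozen model to each downstream task.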