Compared to the great progress of large-scale vision transformers (ViTs) in recent years, large-scale models based on convolutional neural networks (CNNs) are still at an early stage. This work presents a new large-scale CNN-based foundation model, termed InternImage, which, like ViTs, can gain from increasing parameters and training data. Different from recent CNNs that focus on large dense kernels, InternImage takes deformable convolution as the core operator, so that our model not only has the large effective receptive field required for downstream tasks such as detection and segmentation, but also performs adaptive spatial aggregation conditioned on input and task information. As a result, the proposed InternImage reduces the strict inductive bias of traditional CNNs and makes it possible to learn stronger and more robust patterns from massive data with large-scale parameters, as ViTs do. The effectiveness of our model is demonstrated on challenging benchmarks including ImageNet, COCO, and ADE20K. Notably, InternImage-H achieves a new record of 65.4 mAP on COCO test-dev and 62.9 mIoU on ADE20K, outperforming current leading CNNs and ViTs. The code will be released at https://github.com/OpenGVLab/InternImage.
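To make "adaptive spatial aggregation conditioned on input" concrete, the following is a minimal sketch of a deformable-convolution block built on torchvision's DCNv2-style `DeformConv2d` primitive. It is an illustration, not the paper's exact operator (InternImage extends deformable convolution further), and the block name `DeformableBlock` is hypothetical. The key point it shows: the sampling offsets and modulation masks are predicted from the input itself, so where and how strongly the kernel aggregates depends on image content rather than a fixed dense grid.

```python
# Minimal sketch, assuming a DCNv2-style operator via torchvision;
# not the InternImage implementation.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformableBlock(nn.Module):  # hypothetical name, for illustration
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        padding = kernel_size // 2
        k2 = kernel_size * kernel_size
        # Offsets (2 values per sampling point) and modulation masks
        # (1 value per point) are predicted from the input feature map,
        # making the spatial aggregation input-conditioned.
        self.offset_mask = nn.Conv2d(channels, 3 * k2, kernel_size, padding=padding)
        self.dcn = DeformConv2d(channels, channels, kernel_size, padding=padding)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        k2 = self.dcn.kernel_size[0] * self.dcn.kernel_size[1]
        out = self.offset_mask(x)
        offset, mask = out[:, : 2 * k2], out[:, 2 * k2 :]
        mask = torch.sigmoid(mask)  # modulation scalars in [0, 1]
        # Sampling locations are shifted by the learned offsets, giving a
        # larger, content-adaptive effective receptive field than a fixed kernel.
        return self.dcn(x, offset, mask)


if __name__ == "__main__":
    x = torch.randn(1, 64, 56, 56)
    block = DeformableBlock(64)
    print(block(x).shape)  # torch.Size([1, 64, 56, 56])
```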