Large-scale vision foundation models have made significant progress in visual tasks on natural images, where vision transformers are the primary choice owing to their good scalability and representation ability. However, the use of large models in the remote sensing (RS) community remains under-explored: existing models are still small-scale, which limits performance. In this paper, we resort to plain vision transformers with about 100 million parameters and make the first attempt to propose large vision models customized for RS tasks, exploring how such large models perform. Specifically, to handle the large image sizes and the arbitrarily oriented objects in RS images, we propose a new rotated varied-size window attention to substitute the original full attention in transformers. It significantly reduces the computational cost and memory footprint while learning better object representations by extracting rich context from the generated diverse windows. Experiments on detection tasks demonstrate the superiority of our model over all state-of-the-art models, achieving 81.16% mAP on the DOTA-V1.0 dataset. The results on downstream classification and segmentation tasks also show competitive performance compared with existing advanced methods. Further experiments reveal the advantages of our models in terms of computational complexity and few-shot learning.
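Since the abstract only sketches the mechanism, here is a minimal PyTorch sketch of the rotated varied-size window attention idea: queries come from default fixed-size windows, while a small head predicts a per-window, per-head scale, shift, and rotation that warps the sampling grid for keys and values before windowed attention. This is not the authors' reference implementation; all names (`RVSAttention`, `win_params`, etc.) are hypothetical, the input is assumed to be (B, H, W, C) with H and W divisible by the window size, and details such as relative position bias are omitted.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class RVSAttention(nn.Module):
    """Hypothetical sketch of rotated varied-size window attention."""
    def __init__(self, dim, num_heads=8, window_size=7):
        super().__init__()
        assert dim % num_heads == 0
        self.nh, self.hd, self.ws = num_heads, dim // num_heads, window_size
        self.qkv = nn.Linear(dim, dim * 3)
        # Predicts, per window and per head, a 2-D scale, a 2-D shift of the
        # window center, and a rotation angle (5 values per head).
        self.win_params = nn.Sequential(
            nn.AvgPool2d(window_size),
            nn.Conv2d(dim, num_heads * 5, kernel_size=1),
        )
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                       # x: (B, H, W, C)
        B, H, W, C = x.shape
        ws, nh, hd = self.ws, self.nh, self.hd
        nWh, nWw = H // ws, W // ws

        qkv = self.qkv(x).reshape(B, H, W, 3, nh, hd).permute(3, 0, 4, 5, 1, 2)
        q, k, v = qkv.unbind(0)                 # each: (B, nh, hd, H, W)

        # Queries come from the default fixed-size, axis-aligned windows.
        q = q.reshape(B, nh, hd, nWh, ws, nWw, ws)
        q = q.permute(0, 3, 5, 1, 4, 6, 2).reshape(-1, nh, ws * ws, hd)

        # Per-window transform parameters predicted from pooled window features.
        p = self.win_params(x.permute(0, 3, 1, 2)).reshape(B, nh, 5, nWh, nWw)
        scale = 1 + p[:, :, 0:2].tanh().permute(0, 1, 3, 4, 2)  # varied size
        shift = p[:, :, 2:4].tanh().permute(0, 1, 3, 4, 2)      # center offset
        theta = p[:, :, 4].tanh() * math.pi                     # rotation

        # Build the rotated/scaled/shifted sampling grid in [-1, 1] coords.
        half = torch.tensor([ws / W, ws / H], device=x.device)  # half extents
        t = torch.linspace(-1, 1, ws, device=x.device)
        gy, gx = torch.meshgrid(t, t, indexing="ij")
        base = torch.stack([gx, gy], -1) * half                 # (ws, ws, 2)
        cy = (torch.arange(nWh, device=x.device) + 0.5) * 2 * ws / H - 1
        cx = (torch.arange(nWw, device=x.device) + 0.5) * 2 * ws / W - 1
        cyg, cxg = torch.meshgrid(cy, cx, indexing="ij")
        centers = torch.stack([cxg, cyg], -1).reshape(nWh, nWw, 1, 1, 2)

        off = base * scale.reshape(B, nh, nWh, nWw, 1, 1, 2)    # scaled offsets
        cos, sin = theta.cos()[..., None, None], theta.sin()[..., None, None]
        off = torch.stack([off[..., 0] * cos - off[..., 1] * sin,
                           off[..., 0] * sin + off[..., 1] * cos], -1)
        pos = centers + off + (shift * half).reshape(B, nh, nWh, nWw, 1, 1, 2)
        grid = pos.permute(0, 1, 2, 4, 3, 5, 6).reshape(B * nh, H, W, 2)

        # Bilinearly sample keys/values inside the transformed windows.
        k = F.grid_sample(k.reshape(B * nh, hd, H, W), grid, align_corners=False)
        v = F.grid_sample(v.reshape(B * nh, hd, H, W), grid, align_corners=False)
        k = k.reshape(B, nh, hd, nWh, ws, nWw, ws).permute(0, 3, 5, 1, 4, 6, 2)
        v = v.reshape(B, nh, hd, nWh, ws, nWw, ws).permute(0, 3, 5, 1, 4, 6, 2)
        k, v = k.reshape(-1, nh, ws * ws, hd), v.reshape(-1, nh, ws * ws, hd)

        # Attention is restricted to each window, so cost is linear in H * W.
        attn = ((q * hd ** -0.5) @ k.transpose(-2, -1)).softmax(-1)
        out = (attn @ v).reshape(B, nWh, nWw, nh, ws, ws, hd)
        out = out.permute(0, 1, 4, 2, 5, 3, 6).reshape(B, H, W, C)
        return self.proj(out)
```

With a window size of 7 each window attends over only 49 tokens, which is where the claimed savings over full attention on large RS images come from. As a shape check under the above assumptions, `RVSAttention(96, 8, 7)(torch.randn(1, 28, 28, 96))` returns a `(1, 28, 28, 96)` tensor.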