Large-scale vision foundation models have made significant progress in visual tasks on natural images, where vision transformers are the primary choice thanks to their good scalability and representation ability. However, large models remain under-explored in the remote sensing (RS) community, where existing models are still small-scale, which limits performance. In this paper, we resort to plain vision transformers with about 100 million parameters and make the first attempt to propose large vision models customized for RS tasks, exploring how such large models perform. Specifically, to handle the large image size and the arbitrarily oriented objects in RS images, we propose a new rotated varied-size window attention to substitute the original full attention in transformers, which significantly reduces the computational cost and memory footprint while learning better object representations by extracting rich context from the generated diverse windows. Experiments on detection tasks demonstrate the superiority of our model over all state-of-the-art models, achieving 81.16\% mAP on the DOTA-V1.0 dataset. The results of our models on downstream classification and segmentation tasks also show competitive performance compared with existing advanced methods. Further experiments demonstrate the advantages of our models in terms of computational complexity and few-shot learning. The code and models will be released at https://github.com/ViTAE-Transformer/Remote-Sensing-RVSA
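To make the mechanism concrete, below is a minimal PyTorch sketch of the idea behind rotated varied-size window attention: queries stay in fixed windows, while each window predicts a scale, shift, and rotation from its content and samples keys/values on the resulting transformed grid. The module name, the transform-prediction head, and the exact grid construction are illustrative assumptions for exposition, not the released implementation.

```python
# Sketch of rotated varied-size window attention (illustrative, not the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class RotatedVariedSizeWindowAttention(nn.Module):
    def __init__(self, dim, num_heads=8, window_size=7):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads, self.ws = num_heads, window_size
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Predict one transform (sx, sy, tx, ty, theta) per window from pooled features.
        self.transform = nn.Sequential(nn.AvgPool2d(window_size), nn.Conv2d(dim, 5, 1))

    def forward(self, x):
        # x: (B, H, W, C), H and W divisible by the window size.
        B, H, W, C = x.shape
        ws, h = self.ws, self.num_heads
        nh, nw, nwin = H // ws, W // ws, (H // ws) * (W // ws)

        qkv = self.qkv(x).reshape(B, H, W, 3, C).permute(3, 0, 1, 2, 4)
        q, k, v = qkv[0], qkv[1], qkv[2]

        # Queries are partitioned into fixed, non-overlapping windows.
        q = q.reshape(B, nh, ws, nw, ws, h, self.head_dim)
        q = q.permute(0, 1, 3, 5, 2, 4, 6).reshape(B * nwin, h, ws * ws, self.head_dim)

        # Per-window scale, translation, and rotation predicted from window content.
        t = self.transform(x.permute(0, 3, 1, 2))  # (B, 5, nh, nw)
        sx, sy, tx, ty, theta = t.reshape(B, 5, nwin).unbind(1)

        # Base sampling grid of one window, in [-1, 1] normalized image coordinates.
        ys = torch.linspace(-1, 1, ws, device=x.device) / nh
        xs = torch.linspace(-1, 1, ws, device=x.device) / nw
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        base = torch.stack([gx, gy], dim=-1).reshape(1, 1, ws * ws, 2)

        # Window centers in normalized coordinates.
        cy = (torch.arange(nh, device=x.device) + 0.5) / nh * 2 - 1
        cx = (torch.arange(nw, device=x.device) + 0.5) / nw * 2 - 1
        cgy, cgx = torch.meshgrid(cy, cx, indexing="ij")
        centers = torch.stack([cgx, cgy], dim=-1).reshape(1, nwin, 1, 2)

        # Scale, rotate, then shift each window's key/value sampling grid.
        cos, sin = theta.cos().reshape(B, nwin, 1), theta.sin().reshape(B, nwin, 1)
        g = base * torch.stack([1 + sx, 1 + sy], dim=-1).reshape(B, nwin, 1, 2)
        grid = torch.stack([g[..., 0] * cos - g[..., 1] * sin,
                            g[..., 0] * sin + g[..., 1] * cos], dim=-1)
        grid = grid + centers + torch.stack([tx, ty], dim=-1).reshape(B, nwin, 1, 2)

        # Sample keys and values on the transformed grids.
        kv = torch.cat([k, v], dim=-1).permute(0, 3, 1, 2)      # (B, 2C, H, W)
        s = F.grid_sample(kv, grid, align_corners=False)        # (B, 2C, nwin, ws*ws)
        s = s.permute(0, 2, 3, 1).reshape(B * nwin, ws * ws, 2, h, self.head_dim)
        k, v = s.permute(2, 0, 3, 1, 4).unbind(0)               # (B*nwin, h, ws*ws, hd)

        attn = ((q * self.scale) @ k.transpose(-2, -1)).softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, nh, nw, ws, ws, C)
        out = out.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)
        return self.proj(out)
```

Because attention is computed only within each (transformed) window, the cost grows linearly with the number of windows rather than quadratically with the full token count, which is what makes large RS images tractable.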