Large, pretrained models are commonly finetuned with imagery that is heavily augmented to mimic different conditions and scales, and the resulting models are used for various tasks on imagery from a range of spatial scales. Such models overlook scale-specific information in the data for scale-dependent domains such as remote sensing. In this paper, we present Scale-MAE, a pretraining method that explicitly learns relationships between data at different, known scales throughout the pretraining process. Scale-MAE pretrains a network by masking an input image at a known input scale, where the area of the Earth covered by the image, not the image resolution, determines the scale of the ViT positional encoding. Scale-MAE encodes the masked image with a standard ViT backbone and then decodes it through a bandpass filter to reconstruct low/high frequency images at lower/higher scales. We find that tasking the network with reconstructing both low and high frequency images leads to robust multiscale representations for remote sensing imagery. Scale-MAE achieves an average non-parametric kNN classification improvement of $2.4$-$5.6\%$ across eight remote sensing datasets over the current state of the art, and obtains a $0.9$ to $1.7$ mIoU improvement on the SpaceNet building segmentation transfer task across a range of evaluation scales.
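The key idea above, tying the positional encoding to the ground area an image covers rather than to its pixel grid, can be sketched as a sinusoidal encoding whose positions are scaled by the image's ground sample distance (GSD). This is an illustrative sketch only: the function name, the `reference_gsd` convention, and the exact scaling are assumptions for exposition, not the paper's precise formulation.

```python
import numpy as np

def gsd_positional_encoding(num_positions, dim, gsd, reference_gsd=1.0,
                            temperature=10000.0):
    """Sinusoidal positional encoding scaled by ground sample distance (GSD).

    Positions are multiplied by gsd / reference_gsd, so two images of the
    same ground area at different resolutions receive consistent encodings.
    Hypothetical sketch, not the authors' implementation.
    """
    # Scale pixel positions into ground-relative positions.
    positions = np.arange(num_positions)[:, None] * (gsd / reference_gsd)
    # Standard transformer frequency schedule over half the channels.
    freqs = temperature ** (np.arange(0, dim, 2) / dim)
    angles = positions / freqs[None, :]
    pe = np.zeros((num_positions, dim))
    pe[:, 0::2] = np.sin(angles)  # even channels: sine
    pe[:, 1::2] = np.cos(angles)  # odd channels: cosine
    return pe
```

Under this scaling, position 4 in an image with GSD 2 m covers the same ground offset as position 8 in an image with GSD 1 m, so the two receive identical encodings, which is the scale consistency the abstract describes.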