Earth observation data presents a unique challenge: it is spatial like images, sequential like video or text, and highly multimodal. We present OlmoEarth, a multimodal, spatio-temporal foundation model that employs a novel self-supervised learning formulation, masking strategy, and loss function, all designed for the Earth observation domain. OlmoEarth achieves state-of-the-art performance compared with 12 other foundation models across a variety of research benchmarks and real-world tasks from external partners. When evaluating embeddings, OlmoEarth achieves the best performance on 15 of 24 tasks, and with full fine-tuning it is the best on 19 of 29 tasks. We deploy OlmoEarth as the backbone of an end-to-end platform for data collection, labeling, training, and inference of Earth observation models. The OlmoEarth Platform puts frontier foundation models and powerful data management tools into the hands of non-profits and NGOs working to solve the world's biggest problems. OlmoEarth source code, training data, and pre-trained weights are available at \url{https://github.com/allenai/olmoearth_pretrain}.