We demonstrate how language can improve geolocation: the task of predicting the location where an image was taken. Here we study explicit knowledge from human-written guidebooks that describe the salient and class-discriminative visual features humans use for geolocation. We propose the task of Geolocation via Guidebook Grounding that uses a dataset of StreetView images from a diverse set of locations and an associated textual guidebook for GeoGuessr, a popular interactive geolocation game. Our approach predicts a country for each image by attending over the clues automatically extracted from the guidebook. Supervising attention with country-level pseudo labels achieves the best performance. Our approach substantially outperforms a state-of-the-art image-only geolocation method, with an improvement of over 5% in Top-1 accuracy. Our dataset and code can be found at https://github.com/g-luo/geolocation_via_guidebook_grounding.
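The attention mechanism described above — an image embedding attending over guidebook clue embeddings, with the attention weights available for supervision by country-level pseudo labels — can be sketched minimally. All names, dimensions, and the linear classification head here are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical sizes: embedding dim, clue count, and country count are
# placeholders chosen for the sketch.
rng = np.random.default_rng(0)
d = 8            # shared embedding dimension
n_clues = 5      # clues automatically extracted from the guidebook
n_countries = 3

img_emb = rng.normal(size=d)               # image embedding (query)
clue_embs = rng.normal(size=(n_clues, d))  # one embedding per clue

# Attend over the clues: dot-product scores, softmax-normalized.
attn = softmax(clue_embs @ img_emb)        # (n_clues,) weights summing to 1
context = attn @ clue_embs                 # attention-weighted clue summary

# Predict a country from the image and clue context
# (a simple concatenate-then-linear head, assumed for illustration).
W = rng.normal(size=(n_countries, 2 * d))
logits = W @ np.concatenate([img_emb, context])
pred_country = int(np.argmax(logits))

# Supervising attention with country-level pseudo labels would add a loss
# term pushing `attn` mass onto clues tagged with the ground-truth country.
```

The key design point from the abstract is that `attn` is not left free: supervising it with country-level pseudo labels, so that the model attends to clues relevant to the true country, is what yields the best performance.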