Many historical map sheets are publicly available for studies that require long-term historical geographic data. The cartographic design of these maps combines map symbols and text labels. Automatically reading text labels from map images could greatly speed up map interpretation and help generate rich metadata describing the map content. Many text detection algorithms have been proposed to locate text regions in map images automatically, but most of them are trained on out-of-domain datasets (e.g., scene images). Training data determine the quality of machine learning models, and manually annotating text regions in map images is labor-intensive and time-consuming. On the other hand, existing geographic data sources, such as OpenStreetMap (OSM), contain machine-readable map layers, which allow us to separate out the text layer and obtain text label annotations easily. However, the cartographic styles of OSM map tiles and historical maps differ significantly. This paper proposes a method to automatically generate an unlimited amount of annotated historical map images for training text detection models. We use a style transfer model to convert contemporary map images into historical style and place text labels on them. We show that state-of-the-art text detection models (e.g., PSENet) can benefit from the synthetic historical maps and achieve significant improvement on historical map text detection.