Geospatial Location Embedding (GLE) helps a Large Language Model (LLM) assimilate and analyze spatial data. The emergence of GLE in Geospatial Artificial Intelligence (GeoAI) is driven by the need for deeper geospatial awareness in our complex contemporary spaces and by the success of LLMs at extracting deep meaning in Generative AI. We searched Google Scholar, Science Direct, and arXiv for papers on geospatial location embedding and LLMs, and reviewed articles focused on gaining deeper spatial "knowing" through LLMs. We screened 304 titles, 30 abstracts, and 18 full-text papers, which reveal four GLE themes: Entity Location Embedding (ELE), Document Location Embedding (DLE), Sequence Location Embedding (SLE), and Token Location Embedding (TLE). Synthesis is tabular and narrative, including a dialogic conversation between "Space" and "LLM." Although GLEs aid spatial understanding by superimposing spatial data onto language representations, they also expose the need to advance handling of the intricacies of spatial modalities and generalized spatial reasoning. GLEs therefore signal the need for a Spatial Foundation/Language Model (SLM) that embeds spatial knowing within the model architecture itself. The SLM framework advances Spatial Artificial Intelligence Systems (SPAIS) by establishing a Spatial Vector Space (SVS) that maps to physical space. The resulting spatially imbued language model is unique: it simultaneously represents actual space and an AI-capable space, paving the way for AI-native geo storage, analysis, and multi-modality as the basis for SPAIS.
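To make the notion of a geospatial location embedding concrete, the sketch below shows one common way such an embedding can be constructed: a multi-scale sinusoidal encoding of a latitude/longitude pair (a Space2Vec-style idea) that is fused with an LLM hidden vector. The function name, dimensions, frequency scales, and additive fusion are illustrative assumptions for this sketch, not the specific method of any paper covered in the review.

```python
# Minimal sketch (illustrative only): encode a (latitude, longitude) pair into a
# fixed-size vector that could be added to or concatenated with LLM token embeddings.
# Names, scales, and dimensions are assumptions for demonstration, not a method
# taken from the reviewed papers.
import numpy as np

def location_embedding(lat: float, lon: float, dim: int = 64) -> np.ndarray:
    """Multi-scale sinusoidal encoding of a coordinate pair (Space2Vec-style idea)."""
    assert dim % 4 == 0, "dim must split evenly across sin/cos for lat and lon"
    n_freq = dim // 4
    # Geometrically spaced wavelengths, from roughly global scale down to fine scale (degrees).
    freqs = 1.0 / np.geomspace(360.0, 0.01, n_freq)
    lat_phase = lat * freqs
    lon_phase = lon * freqs
    return np.concatenate([
        np.sin(lat_phase), np.cos(lat_phase),
        np.sin(lon_phase), np.cos(lon_phase),
    ])

# Usage example: embed a location and fuse it with a (mock) token embedding.
vec = location_embedding(40.7128, -74.0060, dim=64)   # New York City
token_embedding = np.random.randn(64)                  # stand-in for an LLM hidden vector
fused = token_embedding + vec                          # additive fusion, one simple option
print(vec.shape, fused.shape)                          # (64,) (64,)
```

Additive fusion is only one option; concatenation followed by a learned projection is an equally plausible choice and corresponds to the "superimposing spatial data" framing above.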