Driving trajectory representation learning is of great significance for various location-based services, such as driving pattern mining and route recommendation. However, previous representation generation approaches rarely address three challenges: 1) how to represent the intricate semantic intentions of mobility inexpensively; 2) the complex and weak spatial-temporal dependencies caused by the sparsity and heterogeneity of trajectory data; 3) route selection preferences and their correlation with driving behavior. In this paper, we propose a novel multimodal fusion model, DouFu, for joint trajectory representation learning, which applies multimodal learning and an attention fusion module to capture the internal characteristics of trajectories. We first design movement, route, and global features generated from the trajectory data and urban functional zones, and then analyze each with an attention encoder or a feed-forward network. The attention fusion module incorporates route features into movement features to create a better spatial-temporal embedding. Combined with the global semantic feature, DouFu produces a comprehensive embedding for each trajectory. We evaluate the representations generated by our method and other baseline models on classification and clustering tasks. Empirical results show that DouFu outperforms other models with most downstream learning algorithms, such as linear regression and the support vector machine, by more than 10%.
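The attention fusion step described above can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's actual implementation): movement-step embeddings attend over route-segment embeddings, the fused sequence is pooled, and the global semantic feature is concatenated to form the final trajectory embedding. All dimensions and function names here are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fusion(movement, route, global_feat):
    """Hypothetical sketch: movement embeddings (T, d) attend over
    route embeddings (R, d); the pooled result is concatenated with
    the global semantic feature to form the trajectory embedding."""
    d = movement.shape[-1]
    scores = movement @ route.T / np.sqrt(d)      # (T, R) scaled dot-product scores
    weights = softmax(scores, axis=-1)            # each step weighs route segments
    fused = movement + weights @ route            # residual spatial-temporal embedding
    pooled = fused.mean(axis=0)                   # pool over time steps -> (d,)
    return np.concatenate([pooled, global_feat])  # final trajectory embedding

rng = np.random.default_rng(0)
emb = attention_fusion(rng.normal(size=(12, 16)),  # 12 movement steps, dim 16
                       rng.normal(size=(5, 16)),   # 5 route segments, dim 16
                       rng.normal(size=(8,)))      # global semantic feature
print(emb.shape)  # (24,)
```

The resulting fixed-length vector is what a downstream classifier or clustering algorithm would consume.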