Spatio-temporal modeling, as a canonical task of multivariate time series forecasting, has been a significant research topic in the AI community. To address the heterogeneity and non-stationarity implied in graph streams, in this study we propose Spatio-Temporal Meta-Graph Learning as a novel Graph Structure Learning mechanism on spatio-temporal data. Specifically, we implement this idea in the Meta-Graph Convolutional Recurrent Network (MegaCRN) by plugging a Meta-Graph Learner, powered by a Meta-Node Bank, into a GCRN encoder-decoder. We conduct a comprehensive evaluation on two benchmark datasets (METR-LA and PEMS-BAY) and a large-scale spatio-temporal dataset that contains a variety of non-stationary phenomena. Our model outperforms the state-of-the-art models by a large margin on all three datasets (over 27% in MAE and 34% in RMSE). In addition, through a series of qualitative evaluations, we demonstrate that our model can explicitly disentangle locations and time slots with different patterns and adapt robustly to anomalous situations. Code and datasets are available at https://github.com/deepkashiwa20/MegaCRN.
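To make the Meta-Graph Learner idea more concrete, the sketch below illustrates one plausible way a learnable Meta-Node Bank could be queried by encoder hidden states to produce prototype-augmented node states and an adaptive (meta-)graph. This is a minimal, hypothetical PyTorch sketch assuming attention-based memory retrieval and an embedding-product adjacency; the class name `MetaGraphLearner`, the parameters `mem_num`/`mem_dim`, and the specific wiring are illustrative assumptions, not the authors' implementation (see the repository for the actual code).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetaGraphLearner(nn.Module):
    """Hypothetical sketch of a memory-based meta-graph learner."""

    def __init__(self, num_nodes, hidden_dim, mem_num=20, mem_dim=64):
        super().__init__()
        # Assumed meta-node bank: a set of learnable memory prototypes.
        self.memory = nn.Parameter(torch.randn(mem_num, mem_dim))
        # Projects encoder hidden states into the memory (query) space.
        self.query_proj = nn.Linear(hidden_dim, mem_dim)
        # Assumed learnable node embeddings used to build an adaptive adjacency.
        self.node_embed1 = nn.Parameter(torch.randn(num_nodes, mem_dim))
        self.node_embed2 = nn.Parameter(torch.randn(num_nodes, mem_dim))

    def forward(self, h):
        # h: (batch, num_nodes, hidden_dim) encoder hidden states.
        query = self.query_proj(h)                               # (B, N, mem_dim)
        attn = torch.softmax(query @ self.memory.t(), dim=-1)    # attend over memory items
        augmented = attn @ self.memory                           # prototype-augmented states
        # Adaptive (meta-)graph from node embeddings; one of several common variants.
        adj = F.softmax(F.relu(self.node_embed1 @ self.node_embed2.t()), dim=-1)
        return augmented, adj

if __name__ == "__main__":
    # Toy usage with METR-LA-like shapes (207 sensors).
    learner = MetaGraphLearner(num_nodes=207, hidden_dim=32)
    h = torch.randn(8, 207, 32)
    aug, adj = learner(h)
    print(aug.shape, adj.shape)  # (8, 207, 64), (207, 207)
```

In such a design, the augmented states would feed the GCRN decoder while the learned adjacency drives graph convolutions, letting the structure adapt to heterogeneous and non-stationary inputs; the exact losses and decoder coupling in MegaCRN are described in the paper and repository.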