Neural forecasting of spatiotemporal time series drives both research and industrial innovation in several relevant application domains. Graph neural networks (GNNs) are often the core component of the forecasting architecture. However, in most spatiotemporal GNNs, the computational complexity scales up to quadratically with the product of the sequence length and the number of links in the graph, hindering the application of these models to large graphs and long temporal sequences. While methods to improve scalability have been proposed in the context of static graphs, few research efforts have been devoted to the spatiotemporal case. To fill this gap, we propose a scalable architecture that exploits an efficient encoding of both temporal and spatial dynamics. In particular, we use a randomized recurrent neural network to embed the history of the input time series into high-dimensional state representations encompassing multi-scale temporal dynamics. Such representations are then propagated along the spatial dimension using different powers of the graph adjacency matrix to generate node embeddings characterized by a rich pool of spatiotemporal features. The resulting node embeddings can be efficiently pre-computed in an unsupervised manner, before being fed to a feed-forward decoder that learns to map the multi-scale spatiotemporal representations to predictions. The training procedure can then be parallelized node-wise by sampling the node embeddings without breaking any dependency, thus enabling scalability to large networks. Empirical results on relevant datasets show that our approach achieves results competitive with the state of the art, while dramatically reducing the computational burden.
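The pipeline described above (a fixed randomized recurrent encoder, spatial propagation with powers of the adjacency matrix, and a decoder trained on the pre-computed embeddings) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: all sizes, the echo-state-style encoder, the toy graph, the toy target, and the linear ridge decoder standing in for the feed-forward decoder are assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumptions): N nodes, T timesteps, reservoir dimension H
N, T, H = 8, 50, 16

# Random symmetric adjacency, row-normalized for propagation
A = (rng.random((N, N)) < 0.3).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0)
A_norm = A / np.maximum(A.sum(1, keepdims=True), 1)

# Univariate input time series, one channel per node
X = rng.standard_normal((T, N))

# 1) Randomized recurrent encoder: fixed random weights, never trained
W_in = rng.uniform(-0.5, 0.5, (H, 1))
W = rng.standard_normal((H, H))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 for stable dynamics

h = np.zeros((N, H))
for t in range(T):
    # Per-node state update embedding the series history into the state h
    h = np.tanh(X[t][:, None] @ W_in.T + h @ W.T)

# 2) Spatial propagation: concatenate states diffused by powers A^0, ..., A^K
K = 3
embeddings = np.concatenate(
    [np.linalg.matrix_power(A_norm, k) @ h for k in range(K + 1)], axis=1
)

# 3) Decoder trained on the fixed embeddings (here: ridge regression as a
#    stand-in for the feed-forward decoder; the target is a toy construction)
y = X[-1] * 2.0 + 0.1
lam = 1e-3
W_out = np.linalg.solve(
    embeddings.T @ embeddings + lam * np.eye(embeddings.shape[1]),
    embeddings.T @ y,
)
pred = embeddings @ W_out
```

Because steps 1 and 2 involve no trainable parameters, `embeddings` can be computed once and cached; the decoder then sees each node's embedding as an independent training sample, which is what allows node-wise mini-batch sampling without breaking temporal or spatial dependencies.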