Multivariate Time Series Forecasting focuses on the prediction of future values based on historical context. State-of-the-art sequence-to-sequence models rely on neural attention between timesteps, which allows for temporal learning but fails to consider distinct spatial relationships between variables. In contrast, methods based on graph neural networks explicitly model variable relationships. However, these methods often rely on predefined graphs and perform separate spatial and temporal updates without establishing direct connections between each variable at every timestep. This paper addresses these problems by translating multivariate forecasting into a spatiotemporal sequence formulation where each Transformer input token represents the value of a single variable at a given time. Long-Range Transformers can then learn interactions between space, time, and value information jointly along this extended sequence. Our method, which we call Spacetimeformer, achieves competitive results on benchmarks from traffic forecasting to electricity demand and weather prediction while learning fully-connected spatiotemporal relationships purely from data.
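The tokenization described above can be illustrated with a minimal sketch (an assumption for illustration, not the authors' code): a multivariate series with T timesteps and N variables is flattened into T×N tokens, each carrying one value together with its time index and variable index, so that attention can connect every variable at every timestep.

```python
import numpy as np

def flatten_to_tokens(series: np.ndarray) -> np.ndarray:
    """Flatten a (T, N) multivariate series into (T*N, 3) tokens.

    Each token is (time_idx, var_idx, value); in the full model these
    indices would be mapped to learned time and variable embeddings.
    """
    T, N = series.shape
    time_idx = np.repeat(np.arange(T), N)   # 0,0,...,1,1,... per timestep
    var_idx = np.tile(np.arange(N), T)      # 0,1,...,0,1,... per variable
    values = series.reshape(-1)             # row-major flatten of values
    return np.stack([time_idx, var_idx, values], axis=1)

# 3 timesteps of 2 variables become a sequence of 6 spatiotemporal tokens.
x = np.arange(6, dtype=float).reshape(3, 2)
tokens = flatten_to_tokens(x)
print(tokens.shape)  # (6, 3)
```

This is the key contrast with standard sequence-to-sequence forecasters, whose tokens each bundle all N variables at one timestep; the flattened sequence is N times longer, which is why an efficient long-range attention mechanism is needed.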