Transformers have demonstrated impressive performance in long-term time series forecasting. Existing forecasting research has mostly focused on mapping a short past sub-series (the lookback window) to the future series (the forecast window); the longer time series in the training set is discarded once training is completed. Models can rely only on lookback-window information at inference, which prevents them from analyzing time series from a global perspective. Moreover, the windows used by Transformers are quite narrow, because every time step within them must be modeled. Under this point-wise processing style, widening the windows rapidly exhausts model capacity. For fine-grained time series, this leads to a bottleneck in information input and prediction output that is fatal to long-term forecasting. To overcome this barrier, we propose a new methodology for applying Transformers to time series forecasting. Specifically, we split time series into patches by day and replace point-wise with patch-wise processing, which considerably enhances the information input and output of Transformers. To further help models leverage the whole training set's global information during inference, we distill that information into time representations and make these representations, rather than the raw series, the main modeling entities. Our time-modeling Transformer -- Dateformer -- achieves state-of-the-art accuracy on 7 real-world datasets with a 33.6\% relative improvement and extends the maximum forecast range to half a year.
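The day-wise patching described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; the sampling rate (24 points per day), the toy series, and the variable names are illustrative assumptions only:

```python
import numpy as np

# Hypothetical fine-grained series: 7 days sampled hourly (24 points/day).
points_per_day = 24
series = np.arange(7 * points_per_day, dtype=np.float32)

# Point-wise view: a Transformer would attend over all 168 individual steps,
# so widening the window quickly exhausts model capacity.
# Patch-wise view: reshape the series into day patches, so attention runs
# over only 7 tokens, each carrying a full day's worth of information.
patches = series.reshape(-1, points_per_day)

print(patches.shape)  # (7, 24): 7 day-patches of 24 points each
```

With the same window length, the patch-wise view reduces the number of tokens the model must process by a factor equal to the patch size, which is what allows the lookback and forecast windows to be broadened.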