Time is one of the most significant characteristics of time-series, yet it has received insufficient attention. Prior time-series forecasting research has mainly focused on mapping a past subseries (lookback window) to a future series (forecast window), while the time of a series often plays only an auxiliary role and is even completely ignored in most cases. Due to the point-wise processing within these windows, extrapolating the series to the longer-term future is difficult in this paradigm. To overcome this barrier, we propose a brand-new time-series forecasting framework named Dateformer, which turns attention to modeling time instead of following the above practice. Specifically, time-series are first split into patches by day to supervise the learning of dynamic date-representations with Date Encoder Representations from Transformers (DERT). These representations are then fed into a simple decoder to produce a coarser (or global) prediction, and are also used to help the model seek valuable information from the lookback window to learn a refined (or local) prediction. Dateformer obtains the final result by summing these two parts. Our empirical studies on seven benchmarks show that this time-modeling method is more efficient for long-term series forecasting than sequence-modeling methods. Dateformer yields state-of-the-art accuracy with a remarkable 40% relative improvement and broadens the maximum credible forecasting range to a half-yearly level.
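To make the coarse-plus-refined pipeline described above concrete, the sketch below illustrates the overall flow: per-day date features are encoded into date representations, a simple head produces the coarse (global) daily prediction, a second head refines it using information from the lookback window, and the two parts are summed. This is only a minimal sketch under stated assumptions, not the paper's DERT implementation: the encoder, decoder, and refinement modules are simple stand-ins, and all names, shapes, and feature dimensions (e.g. `DateformerSketch`, `date_feat_dim`, `points_per_day`) are hypothetical.

```python
# Illustrative sketch of the two-part (global + local) prediction; all module
# choices and dimensions below are assumptions, not the authors' architecture.
import torch
import torch.nn as nn


class DateformerSketch(nn.Module):
    def __init__(self, date_feat_dim=8, d_model=64, points_per_day=24):
        super().__init__()
        self.points_per_day = points_per_day
        # Stand-in for DERT: encode per-day date features into date representations.
        self.date_embed = nn.Linear(date_feat_dim, d_model)
        self.date_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        # Simple decoder: date representation -> coarse (global) daily prediction.
        self.global_head = nn.Linear(d_model, points_per_day)
        # Refinement: date representation + lookback context -> local correction.
        self.local_head = nn.Linear(d_model + points_per_day, points_per_day)

    def forward(self, lookback, lb_dates, fc_dates):
        # lookback: (batch, lb_days, points_per_day)  past series split by day
        # lb_dates: (batch, lb_days, date_feat_dim)   date features of lookback days
        # fc_dates: (batch, fc_days, date_feat_dim)   date features of forecast days
        all_dates = torch.cat([lb_dates, fc_dates], dim=1)
        reps = self.date_encoder(self.date_embed(all_dates))
        fc_reps = reps[:, lb_dates.size(1):]            # forecast-day representations
        global_pred = self.global_head(fc_reps)         # coarse / global part
        # Crude lookback summary as the "valuable information" from the past window.
        context = lookback.mean(dim=1, keepdim=True).expand(-1, fc_reps.size(1), -1)
        local_pred = self.local_head(torch.cat([fc_reps, context], dim=-1))  # refined / local part
        return global_pred + local_pred                  # final result: sum of the two parts


# Toy usage: forecast 7 days from a 14-day lookback window.
model = DateformerSketch()
out = model(torch.randn(2, 14, 24), torch.randn(2, 14, 8), torch.randn(2, 7, 8))
print(out.shape)  # torch.Size([2, 7, 24])
```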