Extending the forecasting horizon is a critical demand for real-world applications, such as extreme-weather early warning and long-term energy consumption planning. This paper studies the \textit{long-term forecasting} problem of time series. Prior Transformer-based models adopt various self-attention mechanisms to discover long-range dependencies. However, the intricate temporal patterns of the long-term future prevent the model from finding reliable dependencies. Moreover, Transformers have to adopt sparse versions of point-wise self-attention for efficiency on long series, resulting in an information-utilization bottleneck. To tackle these challenges, we propose Autoformer, a novel decomposition architecture with an Auto-Correlation mechanism. We go beyond the pre-processing convention of series decomposition and renovate it as a basic inner block of deep models. This design empowers Autoformer with progressive decomposition capacities for complex time series. Further, inspired by stochastic process theory, we design the Auto-Correlation mechanism based on series periodicity, which conducts dependency discovery and representation aggregation at the sub-series level. Auto-Correlation outperforms self-attention in both efficiency and accuracy. In long-term forecasting, Autoformer yields state-of-the-art accuracy, with a 38% relative improvement on six benchmarks covering five practical applications: energy, traffic, economics, weather, and disease.
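The two core ideas above can be illustrated with a minimal NumPy sketch, under assumptions of our own choosing (a moving-average kernel of 25 and top-3 lags; the function names and parameters are hypothetical, not the paper's implementation): the inner decomposition block splits a series into trend and seasonal parts, and an Auto-Correlation-style step discovers dominant period lags via the Wiener-Khinchin theorem (autocorrelation from the FFT power spectrum), then aggregates time-delayed copies of the series at the sub-series level.

```python
import numpy as np

def series_decomp(x, kernel=25):
    """Split a 1-D series into (seasonal, trend) via a moving average."""
    pad = kernel // 2
    # edge-pad so the moving-average trend keeps the original length
    xp = np.pad(x, (pad, kernel - 1 - pad), mode="edge")
    trend = np.convolve(xp, np.ones(kernel) / kernel, mode="valid")
    return x - trend, trend  # seasonal residual, trend

def auto_correlation(x, top_k=3):
    """Aggregate time-delayed copies of x, weighted by autocorrelation.

    Lags are found from the circular autocorrelation computed via
    FFT (Wiener-Khinchin), mimicking sub-series-level dependency
    discovery; this is a sketch, not the paper's exact operator.
    """
    L = len(x)
    f = np.fft.rfft(x - x.mean())
    acf = np.fft.irfft(f * np.conj(f), n=L)      # circular autocorrelation
    lags = np.argsort(acf[1:L // 2])[::-1][:top_k] + 1  # top-k positive lags
    a = acf[lags]
    w = np.exp(a - a.max())                      # stabilized softmax weights
    w /= w.sum()
    agg = sum(wi * np.roll(x, -int(tau)) for wi, tau in zip(w, lags))
    return agg, lags
```

For a sine wave with period 20, the top lags found are multiples of 20, and the seasonal and trend parts sum back to the input exactly by construction.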