How to handle time features is a core question for any time series forecasting model. Ironically, it is often ignored or misunderstood by deep-learning-based models, even state-of-the-art baselines. This oversight makes them inefficient, untenable, and unstable. In this paper, we rigorously analyze three prevalent but deficient or unfounded mechanisms in deep time series forecasting from the perspective of time series properties: normalization methods, multivariate forecasting, and input sequence length. Corresponding corollaries and solutions are given on both empirical and theoretical grounds. Building on this analysis, we propose a novel time series forecasting network, RTNet. It is general enough to be combined with both supervised and self-supervised forecasting formats. Thanks to its core idea of respecting time series properties, RTNet, in either forecasting format, clearly outperforms dozens of SOTA time series forecasting baselines on three real-world benchmark datasets. By and large, it also incurs lower time complexity and memory usage while achieving better forecasting accuracy. The source code is available at https://github.com/OrigamiSL/RTNet.