Various deep learning models, especially some recent Transformer-based approaches, have greatly improved the state-of-the-art performance for long-term time series forecasting. However, those Transformer-based models suffer severe performance deterioration with prolonged input length, which prohibits them from using extended historical information. Moreover, these methods tend to handle complex examples in long-term forecasting with increased model complexity, which often leads to a significant increase in computation and less robust performance (e.g., overfitting). We propose a novel neural network architecture, called TreeDRNet, for more effective long-term forecasting. Inspired by robust regression, we introduce a doubly residual link structure to make prediction more robust. Built upon the Kolmogorov-Arnold representation theorem, we explicitly introduce feature selection, model ensemble, and a tree structure to further utilize the extended input sequence, which improves the robustness and representation power of TreeDRNet. Unlike previous deep models for sequential forecasting, TreeDRNet is built entirely on multilayer perceptrons and thus enjoys high computational efficiency. Our extensive empirical studies show that TreeDRNet is significantly more effective than state-of-the-art methods, reducing prediction errors by 20% to 40% for multivariate time series. In particular, TreeDRNet is over 10 times more efficient than Transformer-based methods. The code will be released soon.
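As a rough illustration of the doubly residual idea referenced above, the following is a minimal NumPy sketch, not the paper's implementation: each MLP block emits a backcast (the part of the input it explains) and a forecast, the next block receives only the input residual, and the block forecasts are summed. All function and variable names here are hypothetical, and the block design is assumed to follow the familiar backcast/forecast residual pattern; TreeDRNet's actual blocks additionally involve feature selection, ensembling, and a tree structure not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_block(x, w1, w2):
    # A tiny two-layer MLP with ReLU; weights are illustrative only.
    h = np.maximum(0.0, x @ w1)
    return h @ w2

def doubly_residual_forecast(x, blocks):
    """Each block produces a backcast and a forecast.
    The forecast path accumulates block outputs (forecast residual link);
    the input path passes on x - backcast (backcast residual link)."""
    forecast = 0.0
    for wb1, wb2, wf1, wf2 in blocks:
        backcast = mlp_block(x, wb1, wb2)          # same length as input
        forecast = forecast + mlp_block(x, wf1, wf2)  # horizon length
        x = x - backcast                            # residual fed forward
    return forecast

# Toy sizes: input window 16, hidden width 8, forecast horizon 4, 3 blocks.
in_len, hid, out_len, n_blocks = 16, 8, 4, 3
blocks = [
    (rng.normal(size=(in_len, hid)) * 0.1, rng.normal(size=(hid, in_len)) * 0.1,
     rng.normal(size=(in_len, hid)) * 0.1, rng.normal(size=(hid, out_len)) * 0.1)
    for _ in range(n_blocks)
]
x = rng.normal(size=(in_len,))
y_hat = doubly_residual_forecast(x, blocks)
print(y_hat.shape)  # (4,)
```

Because every operation is a matrix multiply or elementwise op over MLPs, such a stack avoids the quadratic attention cost of Transformer models, which is consistent with the efficiency claim in the abstract.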