Contrastive learning methods have shown an impressive ability to learn meaningful representations for image and time series classification. However, these methods are less effective for time series forecasting, because optimizing for instance discrimination does not translate directly into predicting future states from historical context. Moreover, the construction of positive and negative pairs in existing methods relies heavily on specific characteristics of the time series, restricting their generalization across diverse types of time series data. To address these limitations, we propose SimTS, a simple representation learning approach for improving time series forecasting by learning to predict the future from the past in the latent space. SimTS does not rely on negative pairs or specific assumptions about the characteristics of particular time series. Our extensive experiments on several benchmark time series forecasting datasets show that SimTS achieves competitive performance compared to existing contrastive learning methods. Furthermore, a detailed ablation study exposes the shortcomings of current contrastive learning frameworks for time series forecasting. Overall, our work suggests that SimTS is a promising alternative to other contrastive learning approaches for time series forecasting.
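Below is a minimal, illustrative sketch of the core idea described above (not the exact SimTS implementation): a shared encoder maps the history and future segments of a series into a latent space, a predictor maps the history latent to a predicted future latent, and training maximizes their similarity without any negative pairs. The 1-D CNN encoder, the MLP predictor, the stop-gradient on the future branch, and the cosine-similarity loss are hypothetical choices for illustration; only the predict-the-future-in-latent-space objective comes from the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentPredictiveLoss(nn.Module):
    """Sketch of a negative-pair-free objective: predict the future segment's
    latent representation from the history segment's latent representation."""

    def __init__(self, in_channels: int, latent_dim: int = 64):
        super().__init__()
        # Hypothetical encoder: a small 1-D CNN followed by global pooling.
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, latent_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
        )
        # Hypothetical predictor mapping history latents to future latents.
        self.predictor = nn.Sequential(
            nn.Linear(latent_dim, latent_dim),
            nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )

    def forward(self, past: torch.Tensor, future: torch.Tensor) -> torch.Tensor:
        # past, future: (batch, channels, time)
        z_past = self.encoder(past)               # latent of the history window
        z_future = self.encoder(future).detach()  # target latent (stop-gradient, assumed)
        z_pred = self.predictor(z_past)           # predicted future latent
        # Negative cosine similarity: minimized when prediction and target align.
        return -F.cosine_similarity(z_pred, z_future, dim=-1).mean()


# Usage: split each window into a history half and a future half.
loss_fn = LatentPredictiveLoss(in_channels=7)
x = torch.randn(32, 7, 96)                # batch of multivariate series
loss = loss_fn(x[..., :48], x[..., 48:])  # predict future latents from the past
loss.backward()
```

The stop-gradient on the future branch is an assumption borrowed from siamese self-supervised methods to avoid representational collapse in the absence of negative pairs; the abstract itself only states that no negative pairs are used.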