Time-series representation learning can extract representations from data with temporal dynamics and sparse labels. When labeled data are sparse but unlabeled data are abundant, contrastive learning, a framework that learns a latent space in which similar samples lie close together while dissimilar ones lie far apart, has shown outstanding performance. Depending on how positive pairs are selected and which contrastive loss is used, this strategy encourages different kinds of consistency in the learned time-series representations. We propose a new time-series representation learning method that combines the advantages of self-supervised tasks targeting contextual, temporal, and transformation consistency. It allows the network to learn general representations for various downstream tasks and domains. Specifically, we first apply data preprocessing to generate positive and negative pairs for each self-supervised task. The model then performs contextual, temporal, and transformation contrastive learning and is optimized jointly using the corresponding contrastive losses. We further investigate an uncertainty weighting approach that enables effective multi-task learning by accounting for the contribution of each consistency term. We evaluate the proposed framework on three downstream tasks: time-series classification, forecasting, and anomaly detection. Experimental results show that our method not only outperforms benchmark models on these downstream tasks but is also effective in cross-domain transfer learning.
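The abstract does not spell out the form of the uncertainty weighting. A minimal sketch, assuming the standard homoscedastic-uncertainty formulation (learnable log-variances, one per self-supervised task) and illustrative placeholder loss values, might combine the three contrastive losses as follows; the class name and variable names here are hypothetical, not the paper's API:

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Combine task losses with learnable homoscedastic-uncertainty weights:
    total = sum_i exp(-s_i) * L_i + s_i, where s_i = log(sigma_i^2) is a
    learnable parameter optimized jointly with the encoder."""

    def __init__(self, num_tasks: int = 3):
        super().__init__()
        # one log-variance per task (contextual, temporal, transformation)
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses):
        # losses: iterable of scalar tensors, one per self-supervised task
        total = torch.zeros((), dtype=self.log_vars.dtype)
        for i, loss in enumerate(losses):
            precision = torch.exp(-self.log_vars[i])
            total = total + precision * loss + self.log_vars[i]
        return total

# Illustrative usage: the three scalar losses would come from the contextual,
# temporal, and transformation contrastive objectives (dummy values here).
weighting = UncertaintyWeightedLoss(num_tasks=3)
loss_context = torch.tensor(0.8, requires_grad=True)
loss_temporal = torch.tensor(1.2, requires_grad=True)
loss_transform = torch.tensor(0.5, requires_grad=True)
total_loss = weighting([loss_context, loss_temporal, loss_transform])
total_loss.backward()
```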