This paper presents TS2Vec, a universal framework for learning representations of time series at an arbitrary semantic level. Unlike existing methods, TS2Vec performs contrastive learning in a hierarchical way over augmented context views, which enables a robust contextual representation for each timestamp. Furthermore, the representation of an arbitrary sub-sequence of the time series can be obtained by a simple aggregation over the representations of the corresponding timestamps. We conduct extensive experiments on time series classification tasks to evaluate the quality of the learned representations. TS2Vec achieves significant improvement over existing SOTA methods for unsupervised time series representation on 125 UCR datasets and 29 UEA datasets. The learned timestamp-level representations also achieve superior results on time series forecasting and anomaly detection tasks. A linear regression trained on top of the learned representations outperforms previous SOTA methods for time series forecasting. Furthermore, we present a simple way to apply the learned representations to unsupervised anomaly detection, which establishes SOTA results in the literature. The source code is publicly available at https://github.com/yuezhihan/ts2vec.
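The "simple aggregation" over timestamp representations can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the encoder has already produced a `(T, D)` array of per-timestamp representations, and uses max pooling over the selected span as the aggregation (the pooling choice here is an assumption for illustration).

```python
import numpy as np

def subsequence_repr(timestamp_reprs: np.ndarray, start: int, end: int) -> np.ndarray:
    """Aggregate per-timestamp representations into one sub-sequence vector.

    timestamp_reprs: (T, D) array, one D-dim representation per timestamp.
    start, end: half-open index range [start, end) of the sub-sequence.
    Aggregation here is max pooling over the time axis (illustrative choice).
    """
    return timestamp_reprs[start:end].max(axis=0)

# Example: 4 timestamps, 3-dim representations.
reprs = np.arange(12, dtype=float).reshape(4, 3)
vec = subsequence_repr(reprs, 1, 3)  # pools rows 1 and 2
```

Because every sub-sequence representation is derived from the same timestamp-level representations, one encoder pass serves instance-level, segment-level, and timestamp-level downstream tasks.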
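The forecasting protocol described above — a linear model trained on frozen learned representations — can be sketched with a closed-form ridge regression. This is a hedged stand-in, not the paper's exact training code: the representations `Z` and targets `Y` are assumed given, and the ridge penalty `alpha` is an illustrative regularization choice for the linear head.

```python
import numpy as np

def fit_linear_forecaster(Z: np.ndarray, Y: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Fit a linear map from representations to future values via ridge regression.

    Z: (N, D) learned representations (one per training window, frozen).
    Y: (N, H) future values to predict for each window.
    Returns W of shape (D, H); predictions for new data are Z_new @ W.
    Closed form: W = (Z^T Z + alpha * I)^{-1} Z^T Y.
    """
    D = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + alpha * np.eye(D), Z.T @ Y)

# Usage: recover a known linear relation from exact data.
Z = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
W_true = np.array([[1.0, 2.0], [3.0, 4.0]])
Y = Z @ W_true
W = fit_linear_forecaster(Z, Y, alpha=1e-6)
```

The point of this protocol is that the representation quality, not the downstream model's capacity, accounts for the forecasting results: the head is deliberately kept linear.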