Learning decent representations from unlabeled time-series data with temporal dynamics is a very challenging task. In this paper, we propose an unsupervised Time-Series representation learning framework via Temporal and Contextual Contrasting (TS-TCC), to learn time-series representations from unlabeled data. First, the raw time-series data are transformed into two different yet correlated views by using weak and strong augmentations. Second, we propose a novel temporal contrasting module to learn robust temporal representations by designing a tough cross-view prediction task. Last, to further learn discriminative representations, we propose a contextual contrasting module built upon the contexts from the temporal contrasting module. It attempts to maximize the similarity among different contexts of the same sample while minimizing similarity among contexts of different samples. Experiments have been carried out on three real-world time-series datasets. The results show that training a linear classifier on top of the features learned by our proposed TS-TCC performs comparably with supervised training. Additionally, our proposed TS-TCC shows high efficiency in few-labeled-data and transfer learning scenarios. The code is publicly available at https://github.com/emadeldeen24/TS-TCC.
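The contextual contrasting objective described above (pull together contexts of the same sample from the two augmented views, push apart contexts of different samples) can be sketched as an NT-Xent-style loss. The code below is a minimal illustration of that idea in numpy, not the paper's exact implementation; the function name `contextual_contrastive_loss`, the batch shapes, and the temperature value are assumptions for the example.

```python
import numpy as np

def contextual_contrastive_loss(ctx_a, ctx_b, temperature=0.5):
    """NT-Xent-style contextual contrastive loss (illustrative sketch).

    ctx_a, ctx_b: (N, D) context vectors of the same N samples, produced
    from the weakly and strongly augmented views respectively.
    For each context, the positive is the other view's context of the
    same sample; all other contexts in the batch act as negatives.
    """
    z = np.concatenate([ctx_a, ctx_b], axis=0)           # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)     # unit-normalize -> cosine sim
    sim = (z @ z.T) / temperature                        # (2N, 2N) similarity logits
    n = ctx_a.shape[0]
    mask = ~np.eye(2 * n, dtype=bool)                    # exclude self-similarity
    # index of each row's positive: sample i in view A pairs with i in view B
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # log-softmax of the positive pair over all non-self pairs
    denom = (np.exp(sim) * mask).sum(axis=1)
    log_prob = sim[np.arange(2 * n), pos] - np.log(denom)
    return -log_prob.mean()                              # scalar loss >= 0
```

Lowering the temperature sharpens the softmax, so well-aligned positive pairs drive the loss toward zero while mismatched contexts are penalized more strongly.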