Learning representations from unlabeled time-series data is a challenging problem. Most existing self-supervised and unsupervised approaches in the time-series domain do not capture low- and high-frequency features at the same time. Furthermore, some of these methods employ large-scale models such as transformers, or rely on computationally expensive techniques such as contrastive learning. To tackle these problems, we propose a non-contrastive self-supervised learning approach that efficiently captures low- and high-frequency time-varying features in a cost-effective manner. Our method takes raw time-series data as input and creates two different augmented views for the two branches of the model by randomly sampling augmentations from the same family. Following the terminology of BYOL, the two branches are called the online and target networks, which allows bootstrapping of the latent representation. In contrast to BYOL, where a backbone encoder is followed by multilayer perceptron (MLP) heads, the proposed model contains additional temporal convolutional network (TCN) heads. As the augmented views are passed through the large-kernel convolution blocks of the encoder, the subsequent combination of MLP and TCN heads enables an effective representation of both low- and high-frequency time-varying features due to their differing receptive fields. The two modules (MLP and TCN) act in a complementary manner. We train the online network so that each module learns to predict the output of the corresponding module of the target network branch. To demonstrate the robustness of our model, we performed extensive experiments and ablation studies on five real-world time-series datasets. Our method achieved state-of-the-art performance on all five datasets.
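The two core non-contrastive ingredients referenced above (the online branch predicting the target branch, and the bootstrapped target) can be sketched in NumPy. This is a minimal illustration under the standard BYOL formulation, not the paper's implementation: the negative-cosine prediction loss applied to each head's output, and the exponential-moving-average (EMA) update of the target network's parameters. All function names here are illustrative.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    """Normalize each row vector to unit length."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def byol_loss(online_pred, target_proj):
    """Negative cosine similarity between online predictions and
    (stop-gradient) target projections, averaged over the batch.
    Equals 0 when the two are perfectly aligned, 4 when opposite."""
    p = l2_normalize(online_pred)
    z = l2_normalize(target_proj)
    return float(np.mean(2.0 - 2.0 * np.sum(p * z, axis=-1)))

def ema_update(target_params, online_params, tau=0.99):
    """Bootstrap the target network: target <- tau*target + (1-tau)*online.
    Applied to every parameter tensor; the target gets no gradients."""
    return [tau * t + (1.0 - tau) * o
            for t, o in zip(target_params, online_params)]

# Usage sketch: one training step's loss and target update, with random
# stand-ins for the MLP- and TCN-head outputs of the two branches.
rng = np.random.default_rng(0)
mlp_pred, mlp_tgt = rng.normal(size=(4, 16)), rng.normal(size=(4, 16))
tcn_pred, tcn_tgt = rng.normal(size=(4, 16)), rng.normal(size=(4, 16))
total_loss = byol_loss(mlp_pred, mlp_tgt) + byol_loss(tcn_pred, tcn_tgt)
```

In the described model this loss would be computed per head (MLP and TCN) on both augmented views, and the EMA update would run after each optimizer step on the online network.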