Contrastive learning, a self-supervised method that learns representations from unlabeled data, has shown great promise. Many contrastive learning methods depend on data augmentation techniques, which generate different views of the original signal. However, tuning augmentation policies and hyper-parameters for effective contrastive learning is often time- and resource-consuming. Researchers have designed approaches that automatically generate new views for some input signals, especially image data, but view learning remains underdeveloped for time-series data. In this work, we propose a simple yet effective module that automates view generation for time-series data in contrastive learning, named learning views for time-series data (LEAVES). The proposed module learns the hyper-parameters of augmentations via adversarial training within contrastive learning. We validate the effectiveness of the proposed method on multiple time-series datasets. The experiments demonstrate that the proposed method finds more reasonable views and performs better on downstream tasks than the baselines, including manually tuned augmentation-based contrastive learning methods and state-of-the-art (SOTA) methods.
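The core idea above — training augmentation hyper-parameters adversarially against the contrastive objective — can be illustrated with a minimal toy sketch. This is not the paper's implementation: the encoder is a frozen random projection, only a single augmentation knob (a jitter noise scale `sigma`) is learned, and a finite-difference gradient stands in for backpropagation through reparameterized views; all names and shapes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 64))          # toy batch of 8 univariate series
w = rng.standard_normal((64, 16)) * 0.1   # frozen stand-in "encoder"

def encode(x):
    z = x @ w
    return z / (np.linalg.norm(z, axis=1, keepdims=True) + 1e-8)

def info_nce(z1, z2, tau=0.5):
    """Simplified one-directional NT-Xent: z1[i] should match z2[i]."""
    sim = z1 @ z2.T / tau                  # (8, 8) similarity matrix
    sim -= sim.max(axis=1, keepdims=True)  # numerical stability
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))

def loss_at(sigma, noise1, noise2):
    """Contrastive loss for two jittered views under fixed noise draws."""
    v1, v2 = x + sigma * noise1, x + sigma * noise2
    return info_nce(encode(v1), encode(v2))

sigma, lr, eps = 0.1, 0.05, 1e-4
for step in range(50):
    n1 = rng.standard_normal(x.shape)      # reparameterized noise makes the
    n2 = rng.standard_normal(x.shape)      # loss differentiable in sigma
    # Finite differences stand in for backprop through the augmentation.
    g = (loss_at(sigma + eps, n1, n2) - loss_at(sigma - eps, n1, n2)) / (2 * eps)
    sigma = float(np.clip(sigma + lr * g, 1e-3, 2.0))  # ASCENT: harder views

print(f"learned jitter scale: {sigma:.3f}")
```

In the full method, the encoder would simultaneously take gradient *descent* steps on the same loss, so the augmenter keeps proposing views that are challenging but still label-preserving, removing the need for manual hyper-parameter tuning.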