Not all positive pairs are beneficial to time series contrastive learning. In this paper, we study two types of bad positive pairs that impair the quality of time series representations learned through contrastive learning, i.e., noisy positive pairs and faulty positive pairs. We show that, in the presence of noisy positive pairs, the model tends to simply learn the pattern of noise (Noisy Alignment). Meanwhile, when faulty positive pairs arise, the model spends considerable effort aligning non-representative patterns (Faulty Alignment). To address this problem, we propose a Dynamic Bad Pair Mining (DBPM) algorithm, which reliably identifies and suppresses bad positive pairs in time series contrastive learning. DBPM utilizes a memory module to track the training behavior of each positive pair along the training process. This allows us to identify potential bad positive pairs at each epoch based on their historical training behaviors. The identified bad pairs are then down-weighted using a transformation module. Our experimental results show that DBPM effectively mitigates the negative impacts of bad pairs, and can be easily used as a plug-in to boost the performance of state-of-the-art methods. Code will be made publicly available.
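The mining step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the memory module records each pair's per-epoch contrastive loss, flags pairs whose historical mean loss deviates from the population mean by more than `beta` standard deviations (unusually low losses suggesting noisy pairs, unusually high losses suggesting faulty pairs), and stands in for the transformation module with a fixed down-weighting factor `suppress`. Both hyperparameter names and values are hypothetical.

```python
import numpy as np

def dbpm_weights(loss_memory, beta=1.0, suppress=0.1):
    """Down-weight suspected bad positive pairs based on training history.

    loss_memory: (num_pairs, num_past_epochs) array of per-pair contrastive
    losses recorded over previous epochs by a memory module.
    beta, suppress: illustrative hyperparameters (not values from the paper).
    Returns a (num_pairs,) weight vector to multiply into per-pair losses.
    """
    mu_i = loss_memory.mean(axis=1)       # historical mean loss of each pair
    mu, sigma = mu_i.mean(), mu_i.std()   # statistics over all pairs
    noisy = mu_i < mu - beta * sigma      # suspiciously easy: noise fit quickly
    faulty = mu_i > mu + beta * sigma     # suspiciously hard: faulty alignment
    weights = np.ones_like(mu_i)
    weights[noisy | faulty] = suppress    # fixed-factor stand-in for the
    return weights                        # transformation module
```

A typical use is to multiply the returned weights into the current epoch's per-pair losses before averaging, so flagged pairs contribute less to the gradient.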