Asynchronous time series arise in many applications such as health care, astronomy, and climate science, and pose a significant challenge to standard deep learning architectures. Interpolating asynchronous time series is vital for many real-world tasks such as root cause analysis and medical diagnosis. In this paper, we propose a novel encoder-decoder architecture called Tripletformer for the probabilistic interpolation of asynchronous time series; it operates on a set of observations in which each element is a triple of time, channel, and value. Both the encoder and the decoder of the Tripletformer are built from attention layers and fully connected layers and are invariant to the order in which the set elements are presented. We compare the proposed Tripletformer with a range of baselines on multiple real-world and synthetic asynchronous time series datasets, and the experimental results attest that it produces more accurate and more certain interpolations. With the Tripletformer, we observe an improvement in negative log-likelihood of up to 33% on real-world and 800% on synthetic asynchronous time series datasets compared with the state-of-the-art model.
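To make the triple-based set representation concrete, the following is a minimal sketch in PyTorch of how an asynchronous time series can be encoded as a set of (time, channel, value) triples with an attention layer that respects the ordering property described above. This is an illustrative assumption, not the authors' implementation: the class name `TripleSetEncoder` and all hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical sketch (not the authors' code): an asynchronous time series
# represented as a set of (time, channel, value) triples, encoded with
# self-attention and no positional encoding, so the per-element outputs are
# permutation-equivariant; pooling them yields an order-invariant summary.

class TripleSetEncoder(nn.Module):
    def __init__(self, num_channels: int, d_model: int = 64, num_heads: int = 4):
        super().__init__()
        # Embed each triple: continuous time and value via a linear map,
        # the discrete channel index via a learned embedding.
        self.channel_emb = nn.Embedding(num_channels, d_model)
        self.time_value_proj = nn.Linear(2, d_model)
        # Self-attention across set elements (no positional encoding).
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                nn.Linear(d_model, d_model))

    def forward(self, times, channels, values):
        # times, values: (batch, set_size) floats; channels: (batch, set_size) long
        x = self.time_value_proj(torch.stack([times, values], dim=-1))
        x = x + self.channel_emb(channels)
        h, _ = self.attn(x, x, x)   # attend over all observations in the set
        return self.ff(h)           # (batch, set_size, d_model)

# Toy usage: 5 observations scattered irregularly across 3 channels.
enc = TripleSetEncoder(num_channels=3)
t = torch.rand(1, 5)                 # observation times
c = torch.randint(0, 3, (1, 5))      # channel indices
v = torch.randn(1, 5)                # observed values
out = enc(t, c, v)                   # the order of the triples does not matter
```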