Time series generation (TSG) studies have mainly focused on Generative Adversarial Networks (GANs) combined with recurrent neural network (RNN) variants. However, the fundamental limitations and challenges of training GANs remain. In addition, RNN-based models typically struggle to maintain temporal consistency between distant timesteps. Motivated by the successes in the image generation (IMG) domain, we propose TimeVQVAE, to our knowledge the first work that uses vector quantization (VQ) techniques to address the TSG problem. Moreover, the priors of the discrete latent spaces are learned with bidirectional transformer models, which better capture global temporal consistency. We also propose VQ modeling in a time-frequency domain, separated into low-frequency (LF) and high-frequency (HF) components. This allows us to retain important characteristics of the time series and, in turn, generate synthetic signals of better quality, with sharper changes in modularity, than those of competing TSG methods. Our experimental evaluation is conducted on all datasets in the UCR archive, using well-established metrics from the IMG literature, such as the Fr\'echet inception distance and inception score. Our implementation is available on GitHub: \url{https://github.com/ML4ITS/TimeVQVAE}.
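For intuition only, the snippet below is a minimal sketch of the kind of low-frequency/high-frequency decomposition referred to above, assuming a plain FFT mask on a one-dimensional signal. The function name \texttt{lf\_hf\_split} and the \texttt{cutoff\_bins} parameter are illustrative and not part of the TimeVQVAE codebase, which operates on a time-frequency (STFT-like) representation and applies vector quantization to each branch separately.

\begin{verbatim}
import numpy as np

def lf_hf_split(x: np.ndarray, cutoff_bins: int = 8):
    """Split a 1-D signal into low-frequency (LF) and high-frequency (HF)
    components with a simple one-sided FFT mask (illustrative only)."""
    spec = np.fft.rfft(x)                        # one-sided spectrum
    lf_spec = np.zeros_like(spec)
    lf_spec[:cutoff_bins] = spec[:cutoff_bins]   # keep only the lowest bins
    hf_spec = spec - lf_spec                     # remainder = high frequencies
    x_lf = np.fft.irfft(lf_spec, n=len(x))
    x_hf = np.fft.irfft(hf_spec, n=len(x))
    return x_lf, x_hf

# Usage: the two components sum back to the original signal.
t = np.linspace(0, 1, 256, endpoint=False)
x = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)
x_lf, x_hf = lf_hf_split(x, cutoff_bins=8)
assert np.allclose(x_lf + x_hf, x, atol=1e-8)
\end{verbatim}

Because the mask is applied in the frequency domain and the inverse transform is linear, the LF and HF branches reconstruct the input exactly when summed; the paper's method exploits this kind of split so that each branch can be quantized and modeled at a resolution appropriate to its frequency content.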