Time series models aim to predict the future accurately from the past, and these forecasts feed important downstream tasks such as business decision making. In practice, deep-learning-based time series models come in many forms, but at a high level they all learn some continuous representation of the past and use it to output point or probabilistic forecasts. In this paper, we introduce a novel autoregressive architecture, VQ-AR, which instead learns a \emph{discrete} set of representations that are used to predict the future. Extensive empirical comparison with other competitive deep learning models shows that, surprisingly, such a discrete set of representations gives state-of-the-art or comparable results on a wide variety of time series datasets. We also highlight the shortcomings of this approach, explore its zero-shot generalization capabilities, and present an ablation study on the number of representations. The full source code of the method will be made available at publication time, in the hope that researchers will further investigate this important but overlooked inductive bias for the time series domain.
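To make the central idea concrete, the following is a minimal sketch of how a continuous hidden state can be quantized into a discrete set of learned representations, assuming a standard VQ-VAE-style codebook with a straight-through gradient estimator (van den Oord et al., 2017). The abstract does not detail VQ-AR's exact architecture, so the class name \texttt{CodebookQuantizer} and the parameters \texttt{num\_codes} and \texttt{code\_dim} are illustrative assumptions, not the paper's API.

\begin{verbatim}
# Minimal sketch of vector quantization of time series hidden states,
# assuming a VQ-VAE-style codebook; names and sizes are illustrative,
# not the paper's actual VQ-AR implementation.
import torch
import torch.nn as nn

class CodebookQuantizer(nn.Module):
    """Maps continuous representations to their nearest codebook entries."""

    def __init__(self, num_codes: int = 256, code_dim: int = 64):
        super().__init__()
        # The learnable discrete set of representations ("codes").
        self.codebook = nn.Embedding(num_codes, code_dim)

    def forward(self, h: torch.Tensor):
        # h: (batch, time, code_dim) continuous encoder outputs.
        flat = h.reshape(-1, h.size(-1))                 # (batch*time, code_dim)
        # Euclidean distance from each state to every code.
        dists = torch.cdist(flat, self.codebook.weight)  # (batch*time, num_codes)
        indices = dists.argmin(dim=-1).view(h.shape[:-1])  # discrete token per step
        quantized = self.codebook(indices)               # nearest code vectors
        # Straight-through estimator: the forward pass uses the discrete
        # codes, the backward pass copies gradients to the encoder unchanged.
        quantized = h + (quantized - h).detach()
        return quantized, indices

# Usage: quantize 24 past time steps before autoregressive decoding.
quantizer = CodebookQuantizer(num_codes=256, code_dim=64)
h = torch.randn(8, 24, 64)
q, codes = quantizer(h)
print(q.shape, codes.shape)  # (8, 24, 64) and (8, 24)
\end{verbatim}

The \texttt{indices} output is what makes the representation discrete: each time step is reduced to one of \texttt{num\_codes} learned tokens, which an autoregressive model can then consume in place of the continuous states.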