Automatic forecasting is the task of receiving a time series and returning a forecast for the next time steps without any human intervention. Gaussian Processes (GPs) are a powerful tool for modeling time series, but so far there are no competitive approaches for automatic forecasting based on GPs. We propose practical solutions to two problems: automatic selection of the optimal kernel and reliable estimation of the hyperparameters. We propose a fixed composition of kernels, which contains the components needed to model most time series: a linear trend, periodic patterns, and a flexible kernel for modeling the non-linear trend. Not all components are necessary for every time series; during training, the unnecessary components are automatically made irrelevant via automatic relevance determination (ARD). We moreover place priors on the hyperparameters in order to keep the inference within a plausible range; we design these priors through an empirical Bayes approach. We present results on many time series of different types; our GP model is more accurate than state-of-the-art time series models. Thanks to the priors, a single restart is enough to estimate the hyperparameters; hence the model is also fast to train.
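A minimal sketch of this kind of model, assuming GPflow; the specific kernel variants, the LogNormal priors, and the toy data are illustrative assumptions, not the authors' exact specification.

```python
import numpy as np
import gpflow
import tensorflow_probability as tfp
from gpflow.utilities import to_default_float


def lognormal(mu, sigma):
    # Helper building a LogNormal prior in GPflow's default float precision.
    return tfp.distributions.LogNormal(to_default_float(mu), to_default_float(sigma))


# Fixed composition: linear trend + periodic pattern + flexible non-linear trend.
linear = gpflow.kernels.Linear()
periodic = gpflow.kernels.Periodic(gpflow.kernels.SquaredExponential())
smooth = gpflow.kernels.SquaredExponential()
kernel = linear + periodic + smooth

# Illustrative priors on the hyperparameters keep the estimates in a plausible
# range; a component effectively drops out when its variance is driven to zero.
linear.variance.prior = lognormal(0.0, 1.0)
periodic.base_kernel.variance.prior = lognormal(0.0, 1.0)
periodic.period.prior = lognormal(0.0, 1.0)
smooth.variance.prior = lognormal(0.0, 1.0)
smooth.lengthscales.prior = lognormal(0.0, 1.0)

# Toy data (hypothetical): noisy sine plus a linear trend.
X = np.linspace(0.0, 10.0, 200).reshape(-1, 1)
Y = 0.3 * X + np.sin(2.0 * np.pi * X) + 0.1 * np.random.randn(*X.shape)

model = gpflow.models.GPR((X, Y), kernel=kernel)

# A single optimisation run, no random restarts: with priors set, GPflow's
# training loss is the negative log posterior, so this is MAP estimation.
gpflow.optimizers.Scipy().minimize(model.training_loss, model.trainable_variables)

# One-step-ahead forecast at the next time point.
mean, var = model.predict_y(X[-1:] + (X[1] - X[0]))
```

In this sketch the single call to `Scipy().minimize` plays the role of the single-restart training mentioned above; whether that suffices in practice depends on the priors actually used.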