Many machine learning problems can be framed in the context of estimating functions, and often these are time-dependent functions that are estimated in real time as observations arrive. Gaussian processes (GPs) are an attractive choice for modeling real-valued nonlinear functions due to their flexibility and uncertainty quantification. However, the typical GP regression model suffers from several drawbacks: 1) Conventional GP inference scales $O(N^{3})$ with respect to the number of observations; 2) Updating a GP model sequentially is not trivial; and 3) Covariance kernels typically enforce stationarity constraints on the function, while GPs with non-stationary covariance kernels are often intractable in practice. To overcome these issues, we propose a sequential Monte Carlo algorithm to fit infinite mixtures of GPs that capture non-stationary behavior while allowing for online, distributed inference. Our approach empirically improves performance over state-of-the-art methods for online GP estimation in the presence of non-stationarity in time-series data. To demonstrate the utility of our proposed online Gaussian process mixture-of-experts approach in applied settings, we show that we can successfully implement an optimization algorithm using online Gaussian process bandits.