Adaptive learning is necessary in non-stationary environments, where the learning machine needs to forget past data distributions. Efficient algorithms require a compact model whose computational burden does not grow with the incoming data and whose online parameter updates are as cheap as possible. Existing solutions only partially cover these needs. Here, we propose the first adaptive sparse Gaussian Process (GP) able to address all these issues. We first reformulate a variational sparse GP algorithm to make it adaptive through a forgetting factor. Next, to make model inference as simple as possible, we propose updating a single inducing point of the sparse GP model, together with the remaining model parameters, every time a new sample arrives. As a result, the algorithm exhibits fast convergence of the inference process, which allows an efficient model update (with a single inference iteration) even in highly non-stationary environments. Experimental results demonstrate the capabilities of the proposed algorithm and its good performance in modeling the predictive posterior, in both mean and confidence-interval estimation, compared to state-of-the-art approaches.
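The two key ingredients described above, a forgetting factor that discounts past data and an update of a single inducing point per incoming sample, can be illustrated with a minimal sketch. The code below is a simplified subset-of-regressors style approximation, not the paper's exact variational update; all names (`ForgettingSparseGP`, `lam`, `step`) and the nearest-point heuristic are illustrative assumptions.

```python
import numpy as np

def rbf(X, Z, ls=1.0):
    # Squared-exponential kernel matrix between rows of X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

class ForgettingSparseGP:
    """Toy online sparse GP regressor with exponential forgetting.

    Past sufficient statistics are downweighted by the forgetting factor
    `lam` at each step, and only the single inducing point closest to the
    new sample is nudged toward it (a simplification: older statistics
    computed with the previous inducing locations are treated as
    approximately valid, which the forgetting factor itself mitigates).
    """

    def __init__(self, Z, noise=0.1, lam=0.98, ls=1.0, step=0.05):
        self.Z = np.asarray(Z, float)        # (M, D) inducing inputs
        self.noise, self.lam, self.ls, self.step = noise, lam, ls, step
        M = len(self.Z)
        self.A = np.zeros((M, M))            # lam-discounted sum of k k^T
        self.b = np.zeros(M)                 # lam-discounted sum of k * y

    def update(self, x, y):
        x = np.atleast_2d(x)
        # Move only the inducing point nearest to the new input.
        j = np.argmin(((self.Z - x) ** 2).sum(-1))
        self.Z[j] += self.step * (x[0] - self.Z[j])
        k = rbf(x, self.Z, self.ls)[0]       # (M,)
        # Forget old statistics, then absorb the new sample.
        self.A = self.lam * self.A + np.outer(k, k)
        self.b = self.lam * self.b + k * y

    def predict(self, Xs):
        # Subset-of-regressors predictive mean at the test inputs Xs.
        Kzz = rbf(self.Z, self.Z, self.ls)
        M = len(self.Z)
        w = np.linalg.solve(self.A + self.noise ** 2 * Kzz
                            + 1e-8 * np.eye(M), self.b)
        return rbf(np.atleast_2d(Xs), self.Z, self.ls) @ w
```

Because each update touches a single inducing point and rank-one statistics, the per-sample cost stays fixed in the number of inducing points rather than growing with the stream, which is the property the abstract emphasizes.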