As Gaussian processes are used to answer increasingly complex questions, analytic solutions become scarcer and scarcer. Monte Carlo methods act as a convenient bridge for connecting intractable mathematical expressions with actionable estimates via sampling. Conventional approaches for simulating Gaussian process posteriors view samples as draws from marginal distributions of process values at finite sets of input locations. This distribution-centric characterization leads to generative strategies that scale cubically in the size of the desired random vector. These methods are prohibitively expensive in cases where we would, ideally, like to draw high-dimensional vectors or even continuous sample paths. In this work, we investigate a different line of reasoning: rather than focusing on distributions, we articulate Gaussian conditionals at the level of random variables. We show how this pathwise interpretation of conditioning gives rise to a general family of approximations that lend themselves to efficiently sampling Gaussian process posteriors. Starting from first principles, we derive these methods and analyze the approximation errors they introduce. We then ground these results by exploring the practical implications of pathwise conditioning in various applied settings, such as global optimization and reinforcement learning.
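To make the contrast concrete, the following is a minimal NumPy sketch (not the paper's implementation) comparing the two views on a toy regression problem: the distribution-centric route factorizes the posterior covariance at the test inputs, which costs cubically in their number, whereas the pathwise route conditions a single joint prior draw on the data via Matheron's update rule. The kernel, data, and names such as `rbf_kernel` and `lengthscale` are illustrative assumptions; the efficient approximations developed in the work further replace the exact prior draw with cheaper approximate prior samples.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    # Squared-exponential kernel matrix between the rows of A and B.
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * sq / lengthscale**2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(8, 1))                    # training inputs
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(8)   # noisy observations
Xs = np.linspace(-3, 3, 200)[:, None]                  # test inputs
noise = 0.1**2                                         # observation noise variance

Kxx = rbf_kernel(X, X) + noise * np.eye(len(X))
Ksx = rbf_kernel(Xs, X)
K_inv = np.linalg.inv(Kxx)

# Distribution-centric sampling: build the posterior mean and covariance at Xs,
# then factorize the covariance -- an O(len(Xs)^3) Cholesky per batch of draws.
post_mean = Ksx @ K_inv @ y
post_cov = rbf_kernel(Xs, Xs) - Ksx @ K_inv @ Ksx.T
L = np.linalg.cholesky(post_cov + 1e-8 * np.eye(len(Xs)))
sample_dist = post_mean + L @ rng.standard_normal(len(Xs))

# Pathwise conditioning (Matheron's rule): draw one joint prior sample over
# (Xs, X), then shift it by the data residual mapped through the kernel.
# The same random-variable-level update applies to any set of test locations.
Z = np.vstack([Xs, X])
Lj = np.linalg.cholesky(rbf_kernel(Z, Z) + 1e-8 * np.eye(len(Z)))
prior = Lj @ rng.standard_normal(len(Z))
f_s, f_x = prior[:len(Xs)], prior[len(Xs):]
eps = np.sqrt(noise) * rng.standard_normal(len(X))
sample_path = f_s + Ksx @ K_inv @ (y - f_x - eps)
```

In this sketch the prior is still drawn exactly, so the cost is not yet reduced; the point is that the conditioning step acts on the sample itself, so swapping in an approximate prior sample (for example, one built from a finite feature expansion) immediately yields the kind of efficient posterior sampling the abstract describes.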