Deep Gaussian Processes (DGPs) are flexible, multi-layer extensions of Gaussian processes, but their training remains challenging. Sparse approximations simplify the training but often require optimization over a large number of inducing inputs and their locations across layers. In this paper, we simplify the training by setting the locations to a fixed subset of the data and sampling the inducing inputs from a variational distribution. This reduces the number of trainable parameters and the computation cost without significant performance degradation, as demonstrated by our empirical results on regression problems. Our modifications simplify and stabilize DGP training while making it amenable to sampling schemes for setting the inducing inputs.
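To make the idea concrete, the following is a minimal single-layer sketch (not the authors' implementation) of the two ingredients named above: fixing the inducing locations Z to a random subset of the training inputs rather than optimizing them, and sampling the inducing variables u from a variational Gaussian q(u) = N(m, S). All names, sizes, and the placeholder values of m and S are illustrative assumptions; in practice m and S would be learned by variational inference.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between row-wise inputs A and B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

# Toy training inputs (sizes are illustrative, not from the paper).
N, M = 200, 20
X = rng.uniform(-3.0, 3.0, size=(N, 1))

# 1) Fix the inducing locations Z to a random subset of the data
#    instead of treating them as free parameters to optimize.
Z = X[rng.choice(N, size=M, replace=False)]

# 2) Sample the inducing variables u from a variational Gaussian
#    q(u) = N(m, S); m and the Cholesky factor L of S are placeholders
#    that variational inference would fit in a real training loop.
m = np.zeros(M)
L = 0.1 * np.eye(M)                      # S = L @ L.T
u = m + L @ rng.standard_normal(M)

# 3) Standard sparse-GP conditional mean at the training inputs,
#    given the sampled u:  f(X) ~= K_XZ K_ZZ^{-1} u.
Kzz = rbf_kernel(Z, Z) + 1e-6 * np.eye(M)  # jitter for numerical stability
Kxz = rbf_kernel(X, Z)
f_mean = Kxz @ np.linalg.solve(Kzz, u)
print(f_mean.shape)  # (200,)
```

In a DGP this layer would be stacked, with each layer's output feeding the next; because Z is pinned to a data subset, only the variational parameters (here m and L) and kernel hyperparameters remain to be trained.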