Implicit Processes (IPs) represent a flexible framework that can be used to describe a wide variety of models, from Bayesian neural networks and neural samplers to data generators, among many others. IPs also allow for approximate inference in function space. This change of formulation avoids the intrinsic degeneracies of parameter-space approximate inference, which stem from the large number of parameters and their strong dependencies in large models. For this, previous works in the literature have attempted to employ IPs both to set up the prior and to approximate the resulting posterior. However, this has proven to be a challenging task. Existing methods that can tune the prior IP result in a Gaussian predictive distribution, which fails to capture important data patterns. By contrast, methods that produce flexible predictive distributions by using another IP to approximate the posterior process cannot tune the prior IP to the observed data. We propose here the first method that can accomplish both goals. For this, we rely on an inducing-point representation of the prior IP, as is often done in the context of sparse Gaussian processes. The result is a scalable method for approximate inference with IPs that can tune the prior IP parameters to the data and that provides accurate, non-Gaussian predictive distributions.
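The abstract refers to an inducing-point representation of the prior IP "as is often done in the context of sparse Gaussian processes". As background only, the following is a minimal NumPy sketch of that standard sparse-GP inducing-point construction (Titsias-style collapsed posterior), not the IP method proposed here; the RBF kernel, the fixed inducing locations `Z`, and all function names are illustrative assumptions.

```python
# Minimal sketch of the inducing-point construction from sparse GPs
# (background illustration only; not the IP method described above).
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def sparse_gp_predict(X, y, Z, X_star, noise=0.1):
    """Predictive mean/variance using M inducing points Z (collapsed bound).

    Cost is O(N M^2) instead of the O(N^3) of an exact GP, which is the
    scalability argument behind inducing-point representations.
    """
    Kuu = rbf_kernel(Z, Z) + 1e-6 * np.eye(len(Z))   # M x M
    Kuf = rbf_kernel(Z, X)                           # M x N
    Kus = rbf_kernel(Z, X_star)                      # M x M*
    # Gaussian posterior over the inducing outputs u.
    Sigma = Kuu + Kuf @ Kuf.T / noise**2
    Sigma_inv = np.linalg.inv(Sigma)
    mu_u = Kuu @ Sigma_inv @ Kuf @ y / noise**2
    A = np.linalg.solve(Kuu, Kus)                    # Kuu^{-1} Kus
    mean = A.T @ mu_u
    Kss = rbf_kernel(X_star, X_star)
    cov = (Kss - Kus.T @ np.linalg.solve(Kuu, Kus)
           + A.T @ (Kuu @ Sigma_inv @ Kuu) @ A)
    return mean, np.diag(cov)

# Toy usage: N = 200 noisy observations summarised by M = 10 inducing points.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
Z = np.linspace(-3, 3, 10)[:, None]
X_star = np.linspace(-3, 3, 50)[:, None]
mean, var = sparse_gp_predict(X, y, Z, X_star)
```

Note that the predictive distribution of this sparse-GP construction is Gaussian; the contribution summarised above is precisely to combine such an inducing-point representation of the prior IP with a flexible, non-Gaussian approximate posterior.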