Implicit processes (IPs) are a generalization of Gaussian processes (GPs). IPs may lack a closed-form expression, but they are easy to sample from. Examples include, among others, Bayesian neural networks and neural samplers. IPs can be used as priors over functions, resulting in flexible models with well-calibrated prediction uncertainty estimates. Methods based on IPs usually carry out function-space approximate inference, which overcomes some of the difficulties of parameter-space approximate inference. Nevertheless, the approximations employed often limit the expressiveness of the final model, resulting, \emph{e.g.}, in a Gaussian predictive distribution, which can be restrictive. We propose here a multi-layer generalization of IPs called the Deep Variational Implicit Process (DVIP). This generalization is similar to that of deep GPs over GPs, but it is more flexible due to the use of IPs as the prior distribution over the latent functions. We describe a scalable variational inference algorithm for training DVIP and show that it outperforms previous IP-based methods and also deep GPs. We support these claims via extensive regression and classification experiments. We also evaluate DVIP on large datasets with up to several million data instances to illustrate its good scalability and performance.
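The notion of an implicit process mentioned above can be illustrated with a minimal sketch: a one-hidden-layer Bayesian neural network with standard-normal weight priors defines a distribution over functions that has no closed form but is trivial to sample from. The function name, architecture, and prior scales below are illustrative assumptions, not the paper's actual DVIP model.

```python
import numpy as np

def sample_ip_function(x, hidden=50, rng=None):
    """Draw one function sample f(x) from a BNN-defined implicit process.

    Each fresh draw of the weights yields one sample function evaluated
    at the inputs x; the induced distribution over f is implicit (no
    closed-form density), but sampling is cheap.
    """
    rng = np.random.default_rng(rng)
    w1 = rng.standard_normal((x.shape[1], hidden))
    b1 = rng.standard_normal(hidden)
    # Scale the output layer by 1/sqrt(hidden) so the function variance
    # stays roughly constant as the width grows (an illustrative choice).
    w2 = rng.standard_normal((hidden, 1)) / np.sqrt(hidden)
    b2 = rng.standard_normal(1)
    return np.tanh(x @ w1 + b1) @ w2 + b2  # shape (n, 1)

# Ten prior function samples evaluated on a 1-D grid.
x = np.linspace(-3.0, 3.0, 100).reshape(-1, 1)
samples = np.stack([sample_ip_function(x, rng=seed) for seed in range(10)])
print(samples.shape)  # (10, 100, 1)
```

DVIP, as described in the abstract, stacks such IP priors over latent functions in multiple layers and trains the resulting model with scalable variational inference; this sketch only shows prior sampling from a single-layer IP.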