Gaussian processes (GPs), implemented through multivariate Gaussian distributions over finite collections of data, are the most popular approach in small-area spatiotemporal statistical modelling. In this context they are used to encode correlation structures over space and time and can generalise well in interpolation tasks. Despite their flexibility, off-the-shelf GPs present serious computational challenges which limit their scalability and practical usefulness in applied settings. Here, we propose a novel deep generative modelling approach to tackle this challenge: for a particular spatiotemporal setting, we approximate a class of GP priors by drawing samples from the prior and fitting a variational autoencoder (VAE) to those samples. Once the VAE is trained, its decoder makes spatiotemporal inference highly efficient, because the VAE represents the field through a low-dimensional latent space with independently distributed Gaussian components. Inference then proceeds by replacing the GP with the trained VAE decoder within a Bayesian sampling framework. This approach provides a tractable and easy-to-implement means of approximately encoding spatiotemporal priors and facilitates efficient statistical inference. We demonstrate the utility of our two-stage VAE approach on Bayesian small-area estimation tasks.
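To make the two-stage idea concrete, the following is a minimal, illustrative sketch (not the authors' implementation): stage one draws samples from a GP prior over a fixed set of spatial locations and fits a small VAE to them; stage two replaces the GP prior with independent N(0, 1) priors on the VAE latents, pushed through the trained decoder. The squared-exponential kernel, lengthscale, latent dimension, network sizes, and the use of PyTorch are all assumptions made for this sketch and are not specified in the abstract.

```python
# Sketch of the two-stage VAE approximation of a GP prior (illustrative assumptions only).
import numpy as np
import torch
import torch.nn as nn

# --- Stage 1a: draw samples from a GP prior on a fixed set of spatial locations ---
rng = np.random.default_rng(0)
coords = rng.uniform(0, 1, size=(100, 2))                 # e.g. 100 small-area centroids
d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / (2 * 0.2 ** 2)) + 1e-6 * np.eye(100)      # squared-exponential kernel + jitter
L = np.linalg.cholesky(K)
prior_draws = (L @ rng.standard_normal((100, 5000))).T     # 5000 GP prior realisations
prior_draws = torch.tensor(prior_draws, dtype=torch.float32)

# --- Stage 1b: fit a small VAE whose decoder will stand in for the GP prior ---
class VAE(nn.Module):
    def __init__(self, n=100, latent=20):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, 2 * latent))
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, n))
        self.latent = latent

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation trick
        return self.dec(z), mu, logvar

vae = VAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
for epoch in range(50):
    recon, mu, logvar = vae(prior_draws)
    recon_loss = ((recon - prior_draws) ** 2).sum(dim=-1).mean()
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=-1).mean()
    loss = recon_loss + kl
    opt.zero_grad()
    loss.backward()
    opt.step()

# --- Stage 2: in downstream Bayesian inference, sample z ~ N(0, I) and decode ---
with torch.no_grad():
    z = torch.randn(1, vae.latent)
    approx_gp_draw = vae.dec(z)   # cheap approximate draw from the GP prior
```

In a full analysis, the latent vector z would be treated as a model parameter with a standard normal prior inside an MCMC or other Bayesian sampler, so that each decoder evaluation supplies an approximate GP realisation without any large covariance-matrix operations at inference time.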