Multi-task regression exploits task similarity to transfer knowledge across related tasks and thereby improve performance. Applying Gaussian processes (GPs) in this setting yields a non-parametric yet informative Bayesian multi-task regression paradigm. Multi-task GP (MTGP) provides not only the prediction mean but also the associated prediction variance to quantify uncertainty, and has therefore gained popularity in various scenarios. The linear model of coregionalization (LMC) is a well-known MTGP paradigm that captures task dependency through a linear combination of several independent and diverse GPs. The LMC, however, suffers from high model complexity and limited capability when handling complicated multi-task cases. To this end, we develop a neural embedding of coregionalization that transforms the latent GPs into a high-dimensional latent space to induce rich yet diverse behaviors. Furthermore, we use advanced variational inference together with sparse approximation to devise a tight and compact evidence lower bound (ELBO) for high-quality, scalable model inference. Extensive numerical experiments verify the higher prediction quality and better generalization of our model, named NSVLMC, on various real-world multi-task datasets and the cross-fluid modeling of an unsteady fluidized bed.
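To make the LMC construction referenced above concrete, the following is a minimal NumPy sketch (not the paper's implementation) of the LMC generative view: Q independent latent GPs are sampled and mixed linearly by a task-specific weight matrix A, so the D task outputs become correlated. All variable names and the choice of RBF kernel here are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(x, lengthscale=1.0):
    # Squared-exponential (RBF) kernel matrix for 1-D inputs.
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
n, Q, D = 50, 2, 3                      # inputs, latent GPs, tasks (illustrative sizes)
x = np.linspace(0.0, 1.0, n)

# Draw Q independent latent GP samples with different lengthscales,
# so the shared components are diverse.
U = np.stack([
    rng.multivariate_normal(np.zeros(n), rbf_kernel(x, ls) + 1e-8 * np.eye(n))
    for ls in (0.1, 0.5)
])                                       # shape (Q, n)

A = rng.standard_normal((D, Q))          # task-specific mixing weights
F = A @ U                                # shape (D, n): correlated task outputs

# The implied cross-task (coregionalization) matrix is B = A A^T.
B = A @ A.T
print(F.shape, B.shape)
```

In the LMC, the cross-task covariance factorizes as B ⊗ K for each shared kernel K; the neural embedding proposed in the paper replaces this fixed linear mixing with a richer mapping of the latent GPs.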