Gaussian processes (GPs) are ubiquitously used in science and engineering as metamodels. Standard GPs, however, can only handle numerical or quantitative variables. In this paper, we introduce latent map Gaussian processes (LMGPs) that inherit the attractive properties of GPs but are also applicable to mixed data with both quantitative and qualitative inputs. The core idea behind LMGPs is to learn a low-dimensional manifold on which all qualitative inputs are represented by quantitative features. To learn this manifold, we first assign a unique prior vector representation to each combination of qualitative inputs. We then use a linear map to project these priors onto a manifold that characterizes the posterior representations. Since the posteriors are quantitative, they can be used directly in any standard correlation function, such as the Gaussian. Hence, the optimal map and the corresponding manifold can be learned efficiently by maximizing the Gaussian likelihood function. Through a wide range of analytical and real-world examples, we demonstrate the advantages of LMGPs over state-of-the-art methods in terms of accuracy and versatility. In particular, we show that LMGPs can handle variable-length inputs and provide insights into how qualitative inputs affect the response or interact with each other. We also provide a neural network interpretation of LMGPs and study the effect of prior latent representations on their performance.
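To make the construction concrete, below is a minimal sketch of the LMGP pipeline described above: one-hot prior vectors per qualitative level, a linear map to a 2-D latent space, a Gaussian correlation on the combined quantitative and latent coordinates, and likelihood maximization. This is an illustrative sketch, not the paper's implementation: it assumes a simplified zero-mean, unit-variance GP, and the data, helper names (gauss_kernel, neg_log_likelihood), and hyperparameters are all invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

n, n_levels, latent_dim = 30, 4, 2
X = rng.uniform(size=(n, 1))               # quantitative input
t = rng.integers(0, n_levels, size=n)      # qualitative input (level index)
y = np.sin(4 * X[:, 0]) + 0.5 * t + 0.05 * rng.standard_normal(n)

# Unique prior vector representation per qualitative level (here: one-hot)
priors = np.eye(n_levels)

def gauss_kernel(Z, omega):
    # Gaussian correlation on the combined quantitative + latent coordinates,
    # with per-dimension roughness exp(omega)
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2 * np.exp(omega)).sum(-1)
    return np.exp(-d2)

def neg_log_likelihood(params):
    # params: flattened linear map A (n_levels x latent_dim), then omega
    A = params[:n_levels * latent_dim].reshape(n_levels, latent_dim)
    omega = params[n_levels * latent_dim:]
    H = priors[t] @ A                      # posterior latent positions
    Z = np.hstack([X, H])                  # quantitative + latent features
    R = gauss_kernel(Z, omega) + 1e-6 * np.eye(n)
    L = np.linalg.cholesky(R)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # Negative log marginal likelihood (zero mean, unit variance, up to a constant)
    return 0.5 * (y @ alpha) + np.log(np.diag(L)).sum()

# Jointly optimize the map A and the kernel roughness omega
p0 = 0.1 * rng.standard_normal(n_levels * latent_dim + 1 + latent_dim)
res = minimize(neg_log_likelihood, p0, method="L-BFGS-B")

A_opt = res.x[:n_levels * latent_dim].reshape(n_levels, latent_dim)
print("learned latent position per qualitative level:\n", priors @ A_opt)
```

Because the learned latent positions are ordinary quantitative coordinates, any standard GP machinery (prediction, uncertainty quantification) applies unchanged once the priors are mapped; inspecting the relative distances between the learned points is what yields the insight into how qualitative levels relate to one another.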