Skew-Gaussian processes (SkewGPs) extend the multivariate Unified Skew-Normal distributions over finite-dimensional vectors to distributions over functions. SkewGPs are more general and flexible than Gaussian processes, as SkewGPs may also represent asymmetric distributions. In a recent contribution we showed that SkewGPs and the probit likelihood are conjugate, which allows us to compute the exact posterior for non-parametric binary classification and preference learning. In this paper, we generalize previous results and prove that a SkewGP is conjugate with both the normal and the affine probit likelihood and, more generally, with their product. This allows us to (i) handle classification, preference, numeric and ordinal regression, and mixed problems in a unified framework; (ii) derive closed-form expressions for the corresponding posterior distributions. We show empirically that the proposed SkewGP-based framework provides better performance than Gaussian processes in active learning and Bayesian (constrained) optimization.
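The conjugacy claimed above can be illustrated in the simplest possible setting. The following sketch (a hypothetical 1-D illustration, not the paper's general construction; the variable names `lam`, `x`, etc. are ours) shows that a standard normal prior combined with a probit likelihood yields a skew-normal posterior, which is the scalar analogue of the SkewGP/probit conjugacy:

```python
import numpy as np
from scipy.stats import norm, skewnorm

# Hypothetical 1-D analogue of the conjugacy result: prior N(0, 1),
# probit likelihood Phi(lam * x), with lam a skewness parameter we chose.
lam = 2.0
x = np.linspace(-5.0, 5.0, 2001)

prior = norm.pdf(x)                     # phi(x)
likelihood = norm.cdf(lam * x)          # probit likelihood Phi(lam * x)
unnorm = prior * likelihood             # unnormalized posterior

# Normalizing constant via a Riemann sum; analytically it equals 1/2,
# since int phi(x) * Phi(lam * x) dx = 1/2 by symmetry.
Z = unnorm.sum() * (x[1] - x[0])
posterior = unnorm / Z

# scipy parameterizes the skew-normal as skewnorm.pdf(x, a) = 2 phi(x) Phi(a x),
# so the posterior matches a skew-normal with shape parameter lam exactly.
reference = skewnorm.pdf(x, lam)
```

Here the posterior is skew-normal in closed form; the paper's contribution is that the analogous closed form survives in the non-parametric function-space setting, and for products of normal and affine probit likelihoods.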