Gaussian Processes (GPs) have proven to be a reliable and effective method in probabilistic Machine Learning. Thanks to recent advances, modeling complex data with GPs is becoming increasingly feasible, which makes these models an interesting alternative to Neural and Deep Learning methods, arguably the current state of the art in Machine Learning. For the latter, we see growing interest in so-called explainable approaches, i.e., methods that aim to make a Machine Learning model's decision process transparent to humans. Such methods are particularly needed when illogical or biased reasoning can lead to actual disadvantageous consequences for humans. Ideally, explainable Machine Learning should help detect such flaws in a model and aid a subsequent debugging process. One active line of research in Machine Learning explainability is gradient-based methods, which have been successfully applied to complex neural networks. Given that GPs are closed under differentiation, gradient-based explainability appears to be a promising field of research for GPs as well. This paper focuses primarily on explaining GP classifiers via gradients, where, in contrast to GP regression, derivative GPs are not straightforward to obtain, since the non-Gaussian likelihood renders the posterior intractable in closed form.
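To make the closure under differentiation concrete, here is a minimal NumPy sketch, not taken from the paper: for GP regression with an RBF kernel, the posterior mean mu(x) = k(x, X) alpha is differentiable in closed form, so input gradients, the raw material of gradient-based explanations, are available analytically. The helper names (`rbf`, `mu`, `grad_mu`) and all hyperparameter values are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: analytic input gradient of a GP regression posterior mean.
import numpy as np

def rbf(A, B, ls=1.0):
    # k(a, b) = exp(-||a - b||^2 / (2 * ls^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))                      # training inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=20)   # noisy targets

ls, noise = 1.0, 1e-2
K = rbf(X, X, ls) + noise * np.eye(len(X))
alpha = np.linalg.solve(K, y)                     # posterior weights

def mu(x):
    # posterior mean at a single test point x
    return rbf(x[None, :], X, ls)[0] @ alpha

def grad_mu(x):
    # d/dx k(x, x_i) = -(x - x_i) / ls^2 * k(x, x_i) for the RBF kernel,
    # so the gradient of the mean is the alpha-weighted sum of these terms.
    k = rbf(x[None, :], X, ls)[0]                 # shape (n,)
    dk = -(x[None, :] - X) / ls**2 * k[:, None]   # shape (n, d)
    return dk.T @ alpha                           # shape (d,)

x = np.array([0.3, -0.7])
g = grad_mu(x)

# sanity check against central finite differences
eps = 1e-6
fd = np.array([(mu(x + eps * e) - mu(x - eps * e)) / (2 * eps)
               for e in np.eye(2)])
assert np.allclose(g, fd, atol=1e-5)
print("analytic gradient:", g)
```

For GP classification, the analogous gradient requires differentiating through an approximate, non-Gaussian posterior, which is precisely the difficulty the paper addresses.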