Bayesian methods have become a popular way to incorporate prior knowledge and a notion of uncertainty into machine learning models. At the same time, the complexity of modern machine learning makes it challenging to comprehend a model's reasoning process, let alone express specific prior assumptions in a rigorous manner. While primarily aimed at the former issue, recent developments in transparent machine learning could also broaden the range of prior information that we can provide to complex Bayesian models. Inspired by the idea of self-explaining models, we introduce a corresponding concept for variational Gaussian processes. On the one hand, our contribution improves transparency for these types of models. More importantly, our proposed self-explaining variational posterior distribution allows us to incorporate both general prior knowledge about a target function as a whole and prior knowledge about the contributions of individual features.