Machine learning models can be improved by adapting them to respect existing background knowledge. In this paper we consider multitask Gaussian processes, with background knowledge in the form of constraints that require a specific sum of the outputs to be constant. This is achieved by conditioning the prior distribution on the constraint fulfillment. The approach allows for both linear and nonlinear constraints. We demonstrate that the constraints are fulfilled with high precision and that the construction can improve the overall prediction accuracy as compared to the standard Gaussian process.
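The conditioning step described above can be sketched for the linear case. The following is a minimal NumPy illustration, not the paper's implementation: it assumes a two-output GP with an ICM-style prior (hypothetical coupling matrix `B`, RBF input kernel) and conditions the zero-mean prior on the linear constraint that the two outputs sum to a constant at every input, using standard Gaussian conditioning.

```python
import numpy as np

def rbf(x1, x2, ls=0.2):
    # Squared-exponential kernel (lengthscale `ls` is an illustrative choice).
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

# Two-output GP prior at n inputs: K = B (x) Kx (intrinsic coregionalization).
# B is an assumed inter-task covariance, not taken from the paper.
x = np.linspace(0.0, 1.0, 20)
n = len(x)
B = np.array([[1.0, 0.3],
              [0.3, 1.0]])
K = np.kron(B, rbf(x, x)) + 1e-6 * np.eye(2 * n)

# Linear constraint: f1(x_i) + f2(x_i) = c for all i, i.e. C f = c * 1.
C = np.hstack([np.eye(n), np.eye(n)])
c = 1.0
rhs = c * np.ones(n)

# Condition the Gaussian prior on exact fulfillment of C f = rhs.
S = C @ K @ C.T
mu_post = K @ C.T @ np.linalg.solve(S, rhs)
K_post = K - K @ C.T @ np.linalg.solve(S, C @ K)

# Draw a sample from the constrained prior; it satisfies the sum
# constraint up to the numerical jitter added for the Cholesky factor.
L = np.linalg.cholesky(K_post + 1e-8 * np.eye(2 * n))
f = mu_post + L @ np.random.default_rng(0).standard_normal(2 * n)
residual = np.max(np.abs(f[:n] + f[n:] - c))
print(residual)  # near zero: the constraint holds with high precision
```

Nonlinear constraints do not admit this closed form; as the abstract indicates, the same idea of conditioning the prior on constraint fulfillment still applies, but requires approximate inference rather than a single Gaussian conditioning step.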