Learning precise surrogate models of complex computer simulations and physical machines often requires long-running or expensive experiments. Moreover, the modeled physical dependencies frequently exhibit nonlinear and nonstationary behavior. Machine learning methods used to build the surrogate model should therefore keep the number of queries small, e.g. through active learning, and be able to capture the nonlinear and nonstationary properties of the system. One way to model nonstationarity is to induce an input partitioning, a principle that has proven advantageous in active learning with Gaussian processes. However, existing methods either assume a known partitioning, require complex sampling schemes, or rely on very simple geometries. In this work, we present a simple yet powerful kernel family that incorporates a partitioning that (i) is learnable via gradient-based methods and (ii) uses a geometry more flexible than previous proposals while remaining applicable in the low-data regime. It thus provides a good prior for active learning procedures. We empirically demonstrate excellent performance on a range of active learning tasks.
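To make the idea of a gradient-learnable input partitioning concrete, the following is a minimal illustrative sketch (not the kernel proposed in this work): two local stationary RBF kernels are combined through a smooth sigmoid gate over a hypothetical learnable hyperplane (w, b), so the resulting kernel is nonstationary yet every parameter remains differentiable.

```python
import numpy as np

def rbf(x, y, lengthscale):
    # Standard squared-exponential (RBF) kernel.
    d2 = np.sum((x - y) ** 2)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gate(x, w, b):
    # Soft, differentiable partition indicator: a sigmoid over a
    # learnable hyperplane w^T x + b (illustrative parametrization).
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def partitioned_kernel(x, y, w, b, ls1, ls2):
    # Inputs are softly assigned to one of two regions; points sharing a
    # region interact through that region's local stationary kernel.
    # The sum of products below is a valid positive-semidefinite kernel.
    gx, gy = gate(x, w, b), gate(y, w, b)
    return gx * gy * rbf(x, y, ls1) + (1.0 - gx) * (1.0 - gy) * rbf(x, y, ls2)
```

Because the gate is smooth rather than a hard region assignment, the partition boundary itself can be fitted by the same gradient-based marginal-likelihood optimization used for ordinary GP hyperparameters.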