Gaussian processes have become a promising tool in various safety-critical settings, since the posterior variance can be used to directly estimate the model error and quantify risk. However, state-of-the-art techniques for safety-critical settings hinge on the assumption that the kernel hyperparameters are known, which does not hold in general. To mitigate this, we introduce robust Gaussian process uniform error bounds for settings with unknown hyperparameters. Our approach computes a confidence region in the space of hyperparameters, which enables us to obtain a probabilistic upper bound on the model error of a Gaussian process with arbitrary hyperparameters. Unlike related work, which commonly assumes a priori known bounds on the hyperparameters, we derive such bounds from data in an intuitive fashion. We additionally employ the proposed technique to derive performance guarantees for a class of learning-based control problems. Experiments show that our bound performs significantly better than vanilla and fully Bayesian Gaussian processes.
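The core idea can be illustrated with a minimal sketch: compute the GP posterior standard deviation for every hyperparameter value in a confidence region and take the pointwise worst case as a robust error bound. The RBF kernel, the grid-based lengthscale region, and the scaling factor `beta` below are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale):
    # Squared-exponential kernel with a single lengthscale hyperparameter.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior_std(X, y, Xs, lengthscale, noise=1e-2):
    # Standard GP regression posterior; returns the predictive std at Xs.
    K = rbf_kernel(X, X, lengthscale) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xs, X, lengthscale)
    Kss = rbf_kernel(Xs, Xs, lengthscale)
    L = np.linalg.cholesky(K)
    v = np.linalg.solve(L, Ks.T)
    var = np.clip(np.diag(Kss) - (v ** 2).sum(0), 0.0, None)
    return np.sqrt(var)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (20, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(20)
Xs = np.linspace(-3, 3, 100)[:, None]

beta = 2.0  # placeholder scaling factor from a uniform error bound theorem
region = np.linspace(0.5, 2.0, 8)  # hypothetical confidence region for the lengthscale

# Robust bound: pointwise worst case of beta * posterior_std over the region.
stds = np.stack([gp_posterior_std(X, y, Xs, ell) for ell in region])
robust_bound = beta * stds.max(axis=0)
```

By construction, `robust_bound` dominates the bound obtained from any single hyperparameter value inside the region, which is the property a robust uniform error bound needs when the true hyperparameters are only known to lie in that region.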