In application areas where data generation is expensive, Gaussian processes are a preferred supervised learning model due to their high data efficiency. Particularly in model-based control, Gaussian processes allow the derivation of performance guarantees using probabilistic model error bounds. To make these approaches applicable in practice, two open challenges must be addressed: (i) Existing error bounds rely on prior knowledge, which might not be available for many real-world tasks. (ii) The relationship between the training data and the posterior variance, which mainly drives the error bound, is not well understood and prevents an asymptotic analysis. This article addresses these issues by presenting a novel uniform error bound using Lipschitz continuity and an analysis of the posterior variance function for a large class of kernels. Additionally, we show how these results can be used to guarantee safe control of an unknown dynamical system and provide illustrative numerical examples.