The paper has two major themes. The first part establishes general results for infinite-dimensional optimization problems on Hilbert spaces; these results cover the classical representer theorem and many of its variants as special cases and admit a wider scope of applications. The second part develops a systematic approach to learning the drift function of a stochastic differential equation by combining the results of the first part with a Bayesian hierarchical framework. Importantly, our Bayesian approach achieves low-cost sparse learning through the use of shrinkage priors while properly quantifying uncertainty through posterior distributions. Several examples at the end illustrate the accuracy of our learning scheme.
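For context, the classical representer theorem referred to above can be stated in its standard form (this is the textbook formulation, not necessarily the exact generalization proved in the paper). Let $\mathcal{H}$ be a reproducing-kernel Hilbert space with kernel $k$, let $(x_1, \dots, x_n)$ be the training inputs, and let $g$ be nondecreasing. Consider

\[
  \min_{f \in \mathcal{H}} \; L\bigl(f(x_1), \dots, f(x_n)\bigr) + g\bigl(\lVert f \rVert_{\mathcal{H}}\bigr).
\]

Then any minimizer admits the finite-dimensional representation

\[
  f^{\star}(\cdot) \;=\; \sum_{i=1}^{n} \alpha_i \, k(\cdot, x_i), \qquad \alpha_i \in \mathbb{R},
\]

so the infinite-dimensional search reduces to optimizing over the $n$ coefficients $\alpha_1, \dots, \alpha_n$.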
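A minimal sketch of the second theme, sparse drift learning, under simplifying assumptions that are not the paper's actual method: it uses an Euler–Maruyama pseudo-likelihood, a polynomial drift basis, and a Laplace shrinkage prior whose MAP estimate is the lasso (solved by coordinate descent), in place of the paper's hierarchical framework. The Ornstein–Uhlenbeck model, the basis, and the prior scale `lam` are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an Ornstein-Uhlenbeck path dX = -X dt + 0.5 dW via Euler-Maruyama
# (illustrative ground truth: the drift is b(x) = -x).
dt, n = 0.01, 20000
x = np.empty(n)
x[0] = 1.0
for i in range(n - 1):
    x[i + 1] = x[i] - x[i] * dt + 0.5 * np.sqrt(dt) * rng.standard_normal()

# Scaled increments are noisy evaluations of the drift:
# (X_{t+dt} - X_t) / dt  ~  b(X_t) + noise of variance sigma^2 / dt.
X, y = x[:-1], np.diff(x) / dt
Phi = np.column_stack([X, X**2, X**3])   # hypothetical polynomial drift basis

# MAP estimate under an i.i.d. Laplace (shrinkage) prior on the basis
# coefficients -- equivalently the lasso -- via cyclic coordinate descent
# with soft-thresholding.
lam = 300.0                              # prior scale; an illustrative choice
theta = np.zeros(Phi.shape[1])
col_sq = (Phi**2).sum(axis=0)
for _ in range(200):
    for j in range(Phi.shape[1]):
        r = y - Phi @ theta + Phi[:, j] * theta[j]   # partial residual
        rho = Phi[:, j] @ r
        theta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]

print(theta)  # linear coefficient clearly negative; higher-order terms shrunk
```

The shrinkage prior drives the spurious quadratic and cubic coefficients toward zero while retaining the true linear term, which is the sparse-learning behavior the abstract describes; a full hierarchical treatment would place hyperpriors on the shrinkage scales and report posterior distributions rather than a single MAP point.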