We study posterior contraction rates for a class of deep Gaussian process priors applied to the nonparametric regression problem under a general composition assumption on the regression function. It is shown that the contraction rates can achieve the minimax convergence rate (up to $\log n$ factors), while being adaptive to the underlying structure and smoothness of the target function. The proposed framework extends the Bayesian nonparametric theory for Gaussian process priors.
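To fix notation, a typical composition assumption and rate from this literature can be sketched as follows; the specific symbols ($q$, $\beta_i$, $t_i$) are illustrative assumptions, not taken from this abstract. The regression function is assumed to factor as

```latex
f = g_q \circ g_{q-1} \circ \cdots \circ g_0,
```

where each component of $g_i$ is $\beta_i$-Hölder smooth and depends on at most $t_i$ of its arguments. With the effective smoothness indices

```latex
\beta_i^* := \beta_i \prod_{\ell = i+1}^{q} \min(\beta_\ell, 1),
```

the minimax rate over such composition classes (up to $\log n$ factors) takes the form

```latex
\phi_n = \max_{0 \le i \le q} n^{-\beta_i^* / (2\beta_i^* + t_i)},
```

and "adaptive" means the posterior contracts at this rate without knowledge of $q$, the $\beta_i$, or the $t_i$.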