We study posterior contraction rates for a class of deep Gaussian process priors applied to the nonparametric regression problem under a general composition assumption on the regression function. It is shown that the contraction rates can achieve the minimax convergence rate (up to $\log n$ factors), while being adaptive to the underlying structure and smoothness of the target function. The proposed framework extends the Bayesian nonparametric theory for Gaussian process priors. We discuss the computational challenges of sampling from the posterior distribution.
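For concreteness, composition assumptions of this kind are often stated as follows; the symbols $q$, $d_i$, $g_i$, and $\beta_i$ below are illustrative notation, not taken from the abstract itself:

```latex
% A hedged sketch of one common form of a composition assumption
% (all symbols here are illustrative):
f = g_q \circ g_{q-1} \circ \cdots \circ g_0,
\qquad g_i : [a_i, b_i]^{d_i} \to [a_{i+1}, b_{i+1}]^{d_{i+1}},
```

where each component map $g_i$ is assumed Hölder-smooth with some index $\beta_i$, and the attainable minimax rate is governed by the least favorable component rather than the ambient dimension.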