In the (special) smoothing spline problem one considers a variational problem with a quadratic data fidelity penalty and Laplacian regularisation. Higher-order regularity can be obtained by replacing the Laplacian regulariser with a poly-Laplacian regulariser. The methodology is readily adapted to graphs, and here we consider graph poly-Laplacian regularisation in a fully supervised, non-parametric, noise-corrupted regression problem. In particular, given a dataset $\{x_i\}_{i=1}^n$ and a set of noisy labels $\{y_i\}_{i=1}^n\subset\mathbb{R}$, we let $u_n:\{x_i\}_{i=1}^n\to\mathbb{R}$ be the minimiser of an energy consisting of a data fidelity term and an appropriately scaled graph poly-Laplacian term. When $y_i = g(x_i)+\xi_i$, for iid noise $\xi_i$, and using a geometric random graph, we identify (with high probability) the rate of convergence of $u_n$ to $g$ in the large-data limit $n\to\infty$. Furthermore, our rate coincides, up to logarithms, with the known rate of convergence in the usual smoothing spline model.
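The energy described above can be sketched numerically: minimising $\sum_i (u(x_i)-y_i)^2 + \tau\, u^\top L^m u$ over functions on the sample points has the closed-form solution $(I + \tau L^m)u = y$, where $L$ is the graph Laplacian of a geometric random graph. The following is a minimal illustration only; the radius `eps`, the order `m`, and the weight `tau` are hypothetical choices, not the scalings analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = np.sort(rng.uniform(0, 1, n))      # sample points x_i in [0, 1]
g = np.sin(2 * np.pi * x)              # ground-truth function g
y = g + 0.1 * rng.standard_normal(n)   # noisy labels y_i = g(x_i) + xi_i

# Geometric random graph: connect points within radius eps (hypothetical choice).
eps = 0.05
W = (np.abs(x[:, None] - x[None, :]) < eps).astype(float)
np.fill_diagonal(W, 0.0)

# Unnormalised graph Laplacian L = D - W, and the poly-Laplacian L^m.
L = np.diag(W.sum(axis=1)) - W
m = 2                                  # poly-Laplacian order (hypothetical)
Lm = np.linalg.matrix_power(L, m)

# Minimise ||u - y||^2 + tau * u^T L^m u; first-order optimality gives
# (I + tau * L^m) u = y, a symmetric positive-definite linear system.
tau = 1e-4                             # regularisation weight (hypothetical)
u = np.linalg.solve(np.eye(n) + tau * Lm, y)
```

Larger `tau` or higher `m` enforces more smoothness in the estimate `u`; the paper's result concerns how such estimates converge to $g$ as $n\to\infty$ under appropriate scaling of the graph and the regularisation weight.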