Laplace learning is a popular machine learning algorithm for inferring missing labels from a small number of labelled feature vectors using the geometry of a graph. More precisely, Laplace learning is based on minimising a graph-Dirichlet energy, equivalently a discrete Sobolev $\mathrm{H}^1$ semi-norm, constrained to take the values of known labels on a given subset. The variational problem is asymptotically ill-posed as the number of unlabelled feature vectors goes to infinity, for a fixed number of given labels, due to a lack of regularity in minimisers of the continuum Dirichlet energy in any dimension higher than one. In particular, continuum minimisers are not continuous. One solution is to consider higher-order regularisation, which is the analogue of minimising Sobolev $\mathrm{H}^s$ semi-norms. In this paper we consider the asymptotics of minimising a graph variant of the Sobolev $\mathrm{H}^s$ semi-norm with pointwise constraints. We show that, as expected, one needs $s>d/2$, where $d$ is the dimension of the data manifold. We also show that there must be an upper bound on the connectivity of the graph; that is, highly connected graphs lead to degenerate behaviour of the minimiser even when $s>d/2$.
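As a concrete illustration of the variational problem described above, the following is a minimal sketch of Laplace learning on a toy graph: minimising the graph-Dirichlet energy with pointwise label constraints is equivalent to solving the harmonic extension system $L_{uu} u_u = -L_{ul} u_l$, where $L = D - W$ is the graph Laplacian partitioned into unlabelled ($u$) and labelled ($l$) blocks. The function name and example graph are illustrative, not from the paper.

```python
import numpy as np

def laplace_learning(W, labelled_idx, labels):
    """Harmonic extension: solve L u = 0 on unlabelled nodes with
    u = labels on labelled nodes, where L = D - W is the graph Laplacian."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W
    unlabelled = np.setdiff1d(np.arange(n), labelled_idx)
    # Partition L into blocks and solve L_uu u_u = -L_ul u_l.
    L_uu = L[np.ix_(unlabelled, unlabelled)]
    L_ul = L[np.ix_(unlabelled, labelled_idx)]
    u = np.zeros(n)
    u[labelled_idx] = labels
    u[unlabelled] = np.linalg.solve(L_uu, -L_ul @ labels)
    return u

# Toy example: a path graph on 4 nodes with endpoints labelled 0 and 1;
# the minimiser interpolates linearly along the path.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
u = laplace_learning(W, np.array([0, 3]), np.array([0.0, 1.0]))
# u -> [0, 1/3, 2/3, 1]
```

In one dimension (as on this path graph) the continuum minimiser is continuous, which is consistent with the abstract's statement that the ill-posedness only arises in dimension higher than one.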