Laplace learning is a popular machine learning algorithm for inferring missing labels from a small number of labelled feature vectors using the geometry of a graph. More precisely, Laplace learning is based on minimising a graph-Dirichlet energy, equivalently a discrete Sobolev $\Wkp{1}{2}$ semi-norm, constrained to take the values of the known labels on a given subset. The variational problem is asymptotically ill-posed, as the number of unlabelled feature vectors goes to infinity while the number of given labels remains finite, due to a lack of regularity in minimisers of the continuum Dirichlet energy in any dimension higher than one. In particular, continuum minimisers are not continuous. One solution is to consider higher-order regularisation, which is the analogue of minimising Sobolev $\Wkp{s}{2}$ semi-norms. In this paper we consider the asymptotics of minimising a graph variant of the Sobolev $\Wkp{s}{2}$ semi-norm with pointwise constraints. We show that, as expected, one needs $s>d/2$, where $d$ is the dimension of the data manifold. We also show that there must be an upper bound on the connectivity of the graph; that is, highly connected graphs lead to degenerate behaviour of the minimiser even when $s>d/2$.
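For orientation, the constrained variational problems referred to above can be sketched as follows; the notation (edge weights $w_{ij}$, unnormalised graph Laplacian $L$, labelled values $y_i$) is illustrative shorthand and not taken verbatim from the paper.

\[
  \mathcal{E}_1(u) \;=\; \tfrac{1}{2}\sum_{i,j=1}^{n} w_{ij}\,\bigl(u(x_i)-u(x_j)\bigr)^2
  \;=\; \langle u, L u\rangle ,
  \qquad L = D - W \ \text{(unnormalised graph Laplacian)} .
\]
Laplace learning minimises $\mathcal{E}_1(u)$ subject to $u(x_i)=y_i$ on the labelled set; a graph analogue of the Sobolev $\Wkp{s}{2}$ semi-norm replaces $L$ by its $s$-th power,
\[
  \mathcal{E}_s(u) \;=\; \langle u, L^{s} u\rangle ,
  \qquad \text{minimised subject to } u(x_i)=y_i \ \text{for labelled } x_i .
\]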