Lipschitz learning is a graph-based semi-supervised learning method in which one extends labels from a labeled to an unlabeled data set by solving the infinity Laplace equation on a weighted graph. In this work we prove uniform convergence rates for solutions of the graph infinity Laplace equation as the number of vertices grows to infinity. Their continuum limits are absolutely minimizing Lipschitz extensions with respect to the geodesic metric of the domain from which the graph vertices are sampled. We work under very general assumptions on the graph weights, the set of labeled vertices, and the continuum domain. Our main contribution is that we obtain quantitative convergence rates even for very sparsely connected graphs, as they typically appear in applications like semi-supervised learning. In particular, our framework allows for graph bandwidths down to the connectivity radius. To prove this, we first show a quantitative convergence statement for graph distance functions toward geodesic distance functions in the continuum. Using the "comparison with distance functions" principle, we can pass these convergence statements to infinity harmonic functions and absolutely minimizing Lipschitz extensions.
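To make the setting concrete, the following is a minimal sketch of Lipschitz learning on an unweighted graph. The iterative scheme u(x) ← (max_{y~x} u(y) + min_{y~x} u(y)) / 2 at unlabeled vertices is a standard fixed-point method for the graph infinity Laplace equation; it is an illustration of the general problem class only, not the paper's construction, and the graph, labels, and function names are hypothetical.

```python
def lipschitz_learning(adjacency, labels, iterations=1000):
    """Extend `labels` (dict: vertex -> value) to all vertices so that,
    at convergence, every unlabeled vertex x satisfies
    u(x) = (max_{y~x} u(y) + min_{y~x} u(y)) / 2,
    i.e. the discrete infinity Laplacian of u vanishes at x."""
    # Initialize unlabeled vertices with the mean of the given labels.
    mean = sum(labels.values()) / len(labels)
    u = {x: labels.get(x, mean) for x in adjacency}
    unlabeled = [x for x in adjacency if x not in labels]
    # Gauss-Seidel sweeps: repeatedly enforce the midpoint condition.
    for _ in range(iterations):
        for x in unlabeled:
            nbr_vals = [u[y] for y in adjacency[x]]
            u[x] = 0.5 * (max(nbr_vals) + min(nbr_vals))
    return u

# Path graph 0-1-2-3-4 with labels at the endpoints: the infinity-harmonic
# extension interpolates linearly in the graph distance.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
u = lipschitz_learning(adj, {0: 0.0, 4: 1.0})
```

On this path graph the iteration converges to the linear interpolation u = (0, 0.25, 0.5, 0.75, 1), the unique infinity-harmonic extension of the boundary labels.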