High-dimensional data arises in numerous applications, and the rapidly developing field of geometric deep learning seeks to design neural network architectures for analyzing such data in non-Euclidean domains, such as graphs and manifolds. Recent work by Z. Wang, L. Ruiz, and A. Ribeiro introduced a method for constructing manifold neural networks using the spectral decomposition of the Laplace-Beltrami operator. Moreover, in that work, the authors provide a numerical scheme for implementing such neural networks when the manifold is unknown and one only has access to finitely many sample points. The authors show that this scheme, which relies on building a data-driven graph, converges to the continuum limit as the number of sample points tends to infinity. Here, we build upon this result by establishing a rate of convergence that depends on the intrinsic dimension of the manifold but is independent of the ambient dimension. We also discuss how the rate of convergence depends on the depth of the network and the number of filters used in each layer.
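The numerical scheme summarized above can be illustrated with a minimal sketch: sample points from a manifold, build a data-driven graph with Gaussian kernel weights, and implement a spectral filter layer through the eigendecomposition of the graph Laplacian, which (suitably rescaled) approximates the Laplace-Beltrami operator. This is not the authors' implementation; the choice of manifold (the unit circle), the kernel bandwidth, and the heat-kernel filter are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
theta = np.sort(rng.uniform(0, 2 * np.pi, n))
X = np.column_stack([np.cos(theta), np.sin(theta)])  # n samples on S^1 in R^2

# Data-driven graph: Gaussian kernel weights with bandwidth eps.
eps = 0.1
D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
W = np.exp(-D2 / eps)
np.fill_diagonal(W, 0.0)

# Unnormalized graph Laplacian; its eigenpairs (after rescaling)
# approximate those of the Laplace-Beltrami operator as n grows.
L = np.diag(W.sum(axis=1)) - W
lam, V = np.linalg.eigh(L)  # eigenvalues and eigenvectors

# One spectral filter layer: x -> sigma( V h(Lambda) V^T x ),
# with an illustrative heat-kernel filter h(lambda) = exp(-t * lambda)
# followed by a pointwise ReLU nonlinearity.
def filter_layer(x, t=0.01):
    hx = V @ (np.exp(-t * lam) * (V.T @ x))
    return np.maximum(hx, 0.0)

x = np.sin(3 * theta)  # a smooth signal on the sampled manifold
y = filter_layer(x)
print(y.shape)  # (200,)
```

A deeper network would compose several such layers, each with a bank of filters; the abstract's convergence rate concerns how well this graph-based construction tracks its continuum counterpart as the number of sample points grows.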