In much of the literature on function approximation by deep networks, the function is assumed to be defined on some known domain, such as a cube or sphere. In practice, the data might not be dense on these domains, and therefore, the approximation theory results are observed to be too conservative. In manifold learning, one assumes instead that the data is sampled from an unknown manifold; i.e., the manifold is defined by the data itself. Function approximation on this unknown manifold is then a two-stage procedure: first, one approximates the Laplace-Beltrami operator (and its eigen-decomposition) on this manifold using a graph Laplacian, and next approximates the target function using the eigenfunctions. In this paper, we propose a more direct approach to function approximation on unknown, data-defined manifolds without computing the eigen-decomposition of some operator, and estimate the degree of approximation in terms of the manifold dimension. This leads to similar results in function approximation using deep networks where each channel evaluates a Gaussian network on a possibly unknown manifold.
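For concreteness, the classical two-stage procedure that the paper's direct approach avoids can be sketched as follows. This is an illustrative implementation under standard assumptions, not the paper's method: a graph Laplacian with Gaussian weights stands in for the Laplace-Beltrami operator, and the target function is projected onto the leading eigenvectors (discrete eigenfunctions). The function names, the bandwidth `epsilon`, and the toy circle data are all choices made for this sketch.

```python
import numpy as np

def graph_laplacian(points, epsilon):
    """Stage 1: unnormalized graph Laplacian L = D - W with Gaussian weights,
    a standard discrete approximation to the Laplace-Beltrami operator."""
    sq_dists = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / epsilon)
    np.fill_diagonal(W, 0.0)           # no self-loops
    D = np.diag(W.sum(axis=1))
    return D - W

def eigenfunction_approx(points, values, epsilon, n_eig):
    """Stage 2: project the sampled target function onto the span of the
    first n_eig eigenvectors of the graph Laplacian."""
    L = graph_laplacian(points, epsilon)
    eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    basis = eigvecs[:, :n_eig]             # low-frequency eigenvectors
    coeffs = basis.T @ values              # orthonormal basis -> projection
    return basis @ coeffs

# Toy example: data sampled from a circle, a 1-d manifold embedded in R^2.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)
f = np.sin(3 * theta)                      # smooth target function on the manifold
f_hat = eigenfunction_approx(X, f, epsilon=0.05, n_eig=20)
err = np.max(np.abs(f - f_hat))
```

Note that both stages use only the sampled points, so the manifold never needs to be known explicitly; the cost the abstract alludes to is the eigen-decomposition itself, which the paper's direct construction bypasses.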