In much of the literature on function approximation by deep networks, the function is assumed to be defined on some known domain, such as a cube or a sphere. In practice, the data might not be dense on these domains, and therefore the approximation-theory results are observed to be too conservative. In manifold learning, one assumes instead that the data is sampled from an unknown manifold; i.e., the manifold is defined by the data itself. Function approximation on this unknown manifold is then a two-stage procedure: first, one approximates the Laplace-Beltrami operator (and its eigen-decomposition) on this manifold using a graph Laplacian, and next, one approximates the target function using the eigenfunctions. Alternatively, one first estimates an atlas on the manifold and then uses local approximation techniques based on the local coordinate charts. In this paper, we propose a more direct approach to function approximation on unknown, data-defined manifolds that does not require computing the eigen-decomposition of some operator or an atlas for the manifold, and we estimate the degree of approximation. Our constructions are universal; i.e., they do not require knowledge of any prior on the target function other than continuity on the manifold. For smooth functions, the estimates do not suffer from the so-called saturation phenomenon. We demonstrate, via a property called good propagation of errors, how the results can be lifted to function approximation using deep networks in which each channel evaluates a Gaussian network on a possibly unknown manifold.
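The two-stage baseline procedure mentioned above can be sketched numerically. The following is a minimal illustration, not the paper's proposed method: it samples points from a circle (standing in for an unknown manifold), builds a Gaussian-kernel graph Laplacian to approximate the Laplace-Beltrami operator, and then fits a target function by least squares in the leading eigenvectors, which play the role of eigenfunctions. The bandwidth `eps`, the sample size, and the number of eigenvectors `k` are assumed tuning choices for this toy example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
theta = np.sort(rng.uniform(0.0, 2.0 * np.pi, n))
X = np.column_stack([np.cos(theta), np.sin(theta)])  # samples on the manifold

# Stage 1: approximate the Laplace-Beltrami operator with an unnormalized
# graph Laplacian L = D - W built from a Gaussian kernel.
eps = 0.05  # kernel bandwidth (assumed for this toy example)
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
W = np.exp(-d2 / eps)
L = np.diag(W.sum(axis=1)) - W
evals, evecs = np.linalg.eigh(L)  # eigenvectors approximate eigenfunctions

# Stage 2: least-squares expansion of a target function in the first k
# eigenvectors (the analogue of a truncated eigenfunction expansion).
f = np.sin(3.0 * theta)  # continuous target function on the manifold
k = 12
Phi = evecs[:, :k]
coef, *_ = np.linalg.lstsq(Phi, f, rcond=None)
approx = Phi @ coef

err = np.linalg.norm(f - approx) / np.linalg.norm(f)
print(f"relative L2 error with {k} eigenvectors: {err:.3f}")
```

Note the two sources of error this procedure incurs: the graph Laplacian only approximates the Laplace-Beltrami operator, and the truncated expansion only approximates the target function; the paper's direct construction is motivated by avoiding the first stage altogether.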