Autoencoders have demonstrated remarkable success in learning low-dimensional latent features of high-dimensional data across various applications. Assuming that data are sampled near a low-dimensional manifold, we employ chart autoencoders, which encode data into low-dimensional latent features on a collection of charts, preserving the topology and geometry of the data manifold. This paper establishes statistical guarantees on the generalization error of chart autoencoders. To study their denoising capability, we consider $n$ noisy training samples, together with their noise-free counterparts, on a $d$-dimensional manifold, and show that trained chart autoencoders effectively denoise input data corrupted by noise in the normal direction. We prove that, under proper network architectures, chart autoencoders achieve a squared generalization error of order $\displaystyle n^{-\frac{2}{d+2}}\log^4 n$, which depends on the intrinsic dimension of the manifold and only weakly on the ambient dimension and noise level. We further extend our theory to data whose noise contains both normal and tangential components, in which case chart autoencoders still exhibit a denoising effect on the normal component. As a special case, our theory also applies to classical autoencoders whenever the data manifold admits a global parametrization. Our results provide a solid theoretical foundation for the effectiveness of autoencoders, which we further validate through several numerical experiments.