Dimensionality reduction (DR) and manifold learning (ManL) have been applied extensively to many machine learning tasks, including signal processing, speech recognition, and neuroinformatics. However, whether DR and ManL models can produce valid learning results remains unclear. In this work, we investigate the validity of the learning results of several widely used DR and ManL methods through the chart mapping function of a manifold. We identify a fundamental problem with these methods: the mapping functions they induce violate the basic settings of manifolds, and hence they do not learn manifolds in the mathematical sense. To address this problem, we provide a provably correct algorithm, called fixed points Laplacian mapping (FPLM), which carries a geometric guarantee of finding a valid manifold representation (up to a homeomorphism). Combined with one additional condition (orientation preserving), we discuss a sufficient condition for an algorithm to be bijective for any d-simplex decomposition on a d-manifold. However, constructing such a mapping function, together with a computational method satisfying these conditions, remains an open problem in mathematics.
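For reference, the chart mapping function invoked above is the standard coordinate map from the definition of a manifold; a minimal statement of the property the abstract says is violated:

```latex
% A topological d-manifold M is covered by charts (U_\alpha, \varphi_\alpha),
% where each chart map is a homeomorphism onto an open subset of R^d:
\varphi_\alpha \colon U_\alpha \subseteq M \;\longrightarrow\; \varphi_\alpha(U_\alpha) \subseteq \mathbb{R}^d,
% i.e., \varphi_\alpha is continuous and bijective with a continuous inverse.
% The abstract's claim is that the mappings induced by common DR/ManL
% methods fail to satisfy this requirement (e.g., they need not be
% injective on a chart), so the output is not a manifold representation
% in the mathematical sense.
```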