A recent paper (Neural Networks, {\bf 132} (2020), 253-268) introduces a straightforward kernel-based approximation for manifold learning that requires no knowledge about the manifold other than its dimension. In this paper, we examine how the pointwise error of approximation by least squares optimization based on similarly localized kernels depends upon the data characteristics, and how it deteriorates as one moves away from the training data. The theory is presented for an abstract localized kernel, which can incorporate any prior knowledge that the data lie on an unknown sub-manifold of a known manifold. We demonstrate the performance of our approach on a publicly available micro-Doppler data set, and investigate the use of different preprocessing measures, kernels, and manifold dimensions. In particular, we show that the localized kernel introduced in the above-mentioned paper, when used with PCA components, yields performance nearly competitive with deep neural networks, while offering significant improvements in training speed and memory requirements. To demonstrate that our methods are agnostic to domain knowledge, we also examine the classification problem on a simple video data set.
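As a rough illustration of the kind of approximation the abstract refers to (a sketch only; the notation $\Phi_n$ for a generic localized kernel, training points $y_1,\dots,y_M$, and samples $f(y_k)$ are our own assumptions, not taken from the cited paper), a least squares fit based on localized kernels takes the schematic form
\[
\widehat{f}(x) \;=\; \sum_{j=1}^{M} a_j\, \Phi_n(x, y_j), \qquad
(a_1,\dots,a_M) \;=\; \arg\min_{b\in\mathbb{R}^M}\; \sum_{k=1}^{M} \Big( f(y_k) - \sum_{j=1}^{M} b_j\, \Phi_n(y_k, y_j) \Big)^2,
\]
where the localization of $\Phi_n$ is what makes the pointwise error depend on the proximity of $x$ to the training data.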