An increasing number of emerging applications in data science and engineering are based on multidimensional and structurally rich data. The irregularities of high-dimensional data, however, often compromise the effectiveness of standard machine learning algorithms. We hereby propose the Rank-R Feedforward Neural Network (FNN), a tensor-based nonlinear learning model that imposes a Canonical/Polyadic decomposition on its parameters, thereby offering two core advantages over typical machine learning methods. First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension. Second, the number of the model's trainable parameters is substantially reduced, making it very efficient in small-sample settings. We establish the universal approximation and learnability properties of the Rank-R FNN, and we validate its performance on real-world hyperspectral datasets. Experimental evaluations show that the Rank-R FNN is a computationally inexpensive alternative to the ordinary FNN that achieves state-of-the-art performance on higher-order tensor data.
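To illustrate the parameter-reduction idea, the following is a minimal sketch (not the authors' code) of a single hidden unit whose weight tensor is constrained to a rank-R CP form. For a matrix-valued input, each rank-1 component contributes a bilinear term a_r^T X b_r, so only R(I1 + I2) parameters are stored instead of the I1 * I2 entries of a dense weight matrix; all dimensions and variable names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: a matrix (order-2 tensor) input of shape (I1, I2),
# and a CP rank R for the unit's weight tensor.
I1, I2, R = 16, 20, 3

# CP factors: one factor vector per mode and per rank-1 component,
# in place of a dense I1 x I2 weight matrix.
a = rng.normal(size=(R, I1))   # mode-1 factor vectors
b = rng.normal(size=(R, I2))   # mode-2 factor vectors

X = rng.normal(size=(I1, I2))  # one multilinear input sample (no vectorization)

# Pre-activation: inner product of X with the rank-R weight tensor,
# i.e. sum_r <a_r (outer) b_r, X> = sum_r a_r^T X b_r.
z = sum(a[r] @ X @ b[r] for r in range(R))
h = np.tanh(z)                 # nonlinearity of the feedforward unit

dense_params = I1 * I2         # parameters of an unconstrained weight matrix
cp_params = R * (I1 + I2)      # parameters of the rank-R CP factorization
```

With these illustrative sizes, the CP parameterization stores 108 parameters versus 320 for the dense weight matrix, and the gap widens rapidly for higher-order inputs, which is the source of the model's efficiency in small-sample settings.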