The deployment of machine learning algorithms on resource-constrained edge devices is an important challenge from both theoretical and applied points of view. In this article, we focus on resource-efficient randomly connected neural networks known as Random Vector Functional Link (RVFL) networks, since their simple design and extremely fast training time make them very attractive for solving many applied classification tasks. We propose to represent input features via the density-based encoding known from the area of stochastic computing and to use the operations of binding and bundling from the area of hyperdimensional computing for obtaining the activations of the hidden neurons. Using a collection of 121 real-world datasets from the UCI Machine Learning Repository, we empirically show that the proposed approach demonstrates higher average accuracy than the conventional RVFL. We also demonstrate that it is possible to represent the readout matrix using only integers in a limited range with minimal loss in accuracy. In this case, the proposed approach operates only on small n-bit integers, which results in a computationally efficient architecture. Finally, through hardware Field-Programmable Gate Array (FPGA) implementations, we show that such an approach consumes approximately eleven times less energy than the conventional RVFL.
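For illustration, the following is a minimal sketch of how hidden activations might be formed from density-encoded features using binding and bundling. It assumes a thermometer-style density encoding, XOR binding with per-feature random key vectors, and sum-with-clipping bundling; the function names and the clipping parameter kappa are illustrative assumptions, not definitions taken from this article.

```python
import numpy as np

def density_encode(x, n):
    # Density-based (thermometer-style) encoding: a feature value in [0, 1]
    # is mapped to an n-dimensional binary vector whose number of ones is
    # proportional to the value. (Assumed encoding scheme.)
    k = int(round(x * n))
    v = np.zeros(n, dtype=np.int64)
    v[:k] = 1
    return v

def hidden_activations(features, keys, kappa):
    # features : d feature values scaled to [0, 1]
    # keys     : (d, n) matrix of random binary key vectors, one per feature
    # kappa    : clipping threshold keeping activations as small integers
    n = keys.shape[1]
    # Binding: XOR each feature's density code with that feature's key vector.
    bound = np.array([np.bitwise_xor(density_encode(x, n), keys[i])
                      for i, x in enumerate(features)])
    # Bundling: element-wise sum of the bound vectors, clipped to [0, kappa],
    # so every hidden activation stays a small non-negative integer.
    return np.clip(bound.sum(axis=0), 0, kappa)

# Usage: 4 features, 10-dimensional hidden layer, clipping at 7.
rng = np.random.default_rng(0)
keys = rng.integers(0, 2, size=(4, 10))
print(hidden_activations(np.array([0.2, 0.9, 0.5, 0.0]), keys, kappa=7))
```

Keeping the activations as clipped small integers is what allows the readout matrix, and hence the whole forward pass, to be computed with limited-range integer arithmetic.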