Machine Learning (ML) functions are becoming ubiquitous in latency- and privacy-sensitive IoT applications, prompting a shift toward near-sensor processing at the extreme edge and the consequent increasing adoption of Parallel Ultra-Low Power (PULP) IoT processors. These compute- and memory-constrained parallel architectures need to efficiently run a wide range of algorithms, including key Non-Neural ML kernels that compete favorably with Deep Neural Networks (DNNs) in terms of accuracy under severe resource constraints. In this paper, we focus on enabling efficient parallel execution of Non-Neural ML algorithms on two RISC-V-based PULP platforms: GAP8, a commercial chip, and PULP-OPEN, a research platform running on an FPGA emulator. We optimize the parallel algorithms through fine-grained analysis and intensive optimization to maximize speedup, considering two alternative Floating-Point (FP) emulation libraries on GAP8 and native FPU support on PULP-OPEN. Experimental results show that a target-optimized emulation library yields an average 1.61x runtime improvement over a standard emulation library, while native FPU support reaches up to 32.09x. In terms of parallel speedup, our design improves sequential execution by 7.04x on average on the targeted octa-core platforms. Lastly, we present a comparison with the ARM Cortex-M4 microcontroller (MCU), a widely adopted commercial solution for edge deployments, which is 12.87x slower than PULP-OPEN.