Embedding Artificial Intelligence onto low-power devices is a challenging task that has been partly overcome with recent advances in machine learning and hardware design. Presently, deep neural networks can be deployed on embedded targets to perform different tasks such as speech recognition, object detection or Human Activity Recognition. However, there is still room for optimization of deep neural networks on embedded devices. These optimizations mainly address power consumption, memory and real-time constraints, but also easier deployment at the edge. Moreover, there is still a need for a better understanding of what can be achieved for different use cases. This work focuses on quantization and deployment of deep neural networks onto low-power 32-bit microcontrollers. The quantization methods relevant in the context of embedded execution on a microcontroller are first outlined. Then, a new framework for end-to-end deep neural network training, quantization and deployment is presented. This framework, called MicroAI, is designed as an alternative to existing inference engines (TensorFlow Lite for Microcontrollers and STM32CubeAI). Our framework can indeed be easily adjusted and/or extended for specific use cases. Execution using single-precision 32-bit floating-point as well as fixed-point on 8- and 16-bit integers is supported. The proposed quantization method is evaluated with three different datasets (UCI-HAR, Spoken MNIST and GTSRB). Finally, a comparison study between MicroAI and both existing embedded inference engines is provided in terms of memory and power efficiency. On-device evaluation is done using ARM Cortex-M4F-based microcontrollers (Ambiq Apollo3 and STM32L452RE).
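To illustrate the kind of fixed-point arithmetic the abstract refers to, the sketch below shows a generic signed Q-format quantization of floating-point weights to 8-bit integers. This is only a minimal, hypothetical example of the general technique; the function names, the chosen Q-format (Q2.5), and the rounding/clipping policy are illustrative assumptions, not the paper's actual MicroAI method.

```python
import numpy as np

def quantize_fixed_point(x, bits=8, frac_bits=5):
    """Map float values to signed fixed-point integers (generic Qm.n sketch).

    bits: total word width (e.g. 8 or 16, as in the abstract)
    frac_bits: number of fractional bits (illustrative choice)
    """
    scale = 1 << frac_bits                       # 2^n, the fixed-point scale
    lo = -(1 << (bits - 1))                      # e.g. -128 for 8 bits
    hi = (1 << (bits - 1)) - 1                   # e.g. +127 for 8 bits
    return np.clip(np.round(x * scale), lo, hi).astype(np.int32)

def dequantize(q, frac_bits=5):
    """Recover approximate float values from the fixed-point integers."""
    return q.astype(np.float32) / (1 << frac_bits)

w = np.array([-1.25, 0.5, 0.03125, 3.9])
q = quantize_fixed_point(w, bits=8, frac_bits=5)
# q == [-40, 16, 1, 125]; dequantize(q) ≈ [-1.25, 0.5, 0.03125, 3.90625]
```

On a Cortex-M4F class target, the corresponding inference kernels would then operate on the `int8`/`int16` values directly, trading a small quantization error (as for 3.9 above) for reduced memory footprint and integer-only arithmetic.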