On-device training enables a model to adapt to new data collected from sensors by fine-tuning a pre-trained model. Users can benefit from customized AI models without having to transfer data to the cloud, protecting privacy. However, the training memory consumption is prohibitive for IoT devices, which have tiny memory resources. We propose an algorithm-system co-design framework that makes on-device training possible with only 256KB of memory. On-device training faces two unique challenges: (1) the quantized graphs of neural networks are hard to optimize due to low bit precision and the lack of normalization; (2) the limited hardware resources do not allow full back-propagation. To cope with the optimization difficulty, we propose Quantization-Aware Scaling to calibrate gradient scales and stabilize 8-bit quantized training. To reduce the memory footprint, we propose Sparse Update, which skips the gradient computation of less important layers and sub-tensors. The algorithmic innovations are implemented by a lightweight training system, Tiny Training Engine, which prunes the backward computation graph to support sparse updates and offloads runtime auto-differentiation to compile time. Our framework is the first solution to enable tiny on-device training of convolutional neural networks under 256KB SRAM and 1MB Flash without auxiliary memory, using less than 1/1000 of the memory of PyTorch and TensorFlow while matching the accuracy on the tinyML application VWW (Visual Wake Words). Our study enables IoT devices not only to perform inference but also to continuously adapt to new data for on-device lifelong learning. A video demo can be found here: https://youtu.be/XaDCO8YtmBw.
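To make the two ideas concrete, below is a minimal PyTorch-style sketch, not the actual Tiny Training Engine implementation. The layer selection, the `trainable_layer_names` argument, and the per-tensor `weight_scale` value are illustrative assumptions; the gradient-rescaling rule follows the ratio-matching argument sketched in the comments and should be read as an approximation of Quantization-Aware Scaling rather than its exact formula.

```python
# Sketch of Sparse Update and Quantization-Aware Scaling (QAS) in PyTorch.
# Hypothetical helper names; the real system implements these by pruning the
# backward graph and doing compile-time autodiff, not via requires_grad flags.

import torch
import torch.nn as nn


def apply_sparse_update(model: nn.Module, trainable_layer_names: set) -> None:
    """Sparse Update (sketch): freeze every parameter except those in a
    pre-selected set of important layers, so autograd never computes or
    stores their gradients during back-propagation."""
    for name, param in model.named_parameters():
        param.requires_grad = any(name.startswith(p) for p in trainable_layer_names)


def quantization_aware_scaling(param: torch.Tensor, weight_scale: float) -> None:
    """QAS (sketch): rescale the raw gradient of a quantized weight tensor so
    its magnitude matches the quantized weight magnitude. If W_bar ~= W / s and
    grad(W_bar) = s * grad(W), dividing the gradient by s**2 restores the
    floating-point weight-to-gradient ratio (assumed per-tensor rule)."""
    if param.grad is not None:
        param.grad.div_(weight_scale ** 2)
```

A typical use would call `apply_sparse_update` once before training (e.g. keeping only the last few layers and biases trainable), then call `quantization_aware_scaling` on each updated weight tensor after `loss.backward()` and before `optimizer.step()`. Note that freezing via `requires_grad` only approximates layer-level sparse updates; the sub-tensor granularity described in the abstract requires the compile-time backward-graph pruning done by Tiny Training Engine.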