Embedded and personal IoT devices are powered by microcontroller units (MCUs), whose extreme resource scarcity is a major obstacle for applications relying on on-device deep learning inference. With orders of magnitude less storage, memory, and compute than neural networks typically require, MCUs impose strict structural constraints on the network architecture and call for specialized model compression methodology. In this work, we present a differentiable structured network pruning method for convolutional neural networks, which integrates a model's MCU-specific resource usage and parameter importance feedback to obtain highly compressed yet accurate classification models. Our methodology (a) improves the key resource usage of models by up to 80x; (b) prunes iteratively while the model is trained, resulting in little to no overhead and sometimes even improved training time; (c) produces compressed models with matching or up to 1.7x improved resource usage in less time than prior MCU-specific methods. Compressed models are available for download.
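To make the notion of structured pruning concrete, the sketch below removes whole output channels of a convolutional layer ranked by L1-norm importance. This is a minimal, generic illustration only: the paper's method instead learns channel selection differentiably, jointly with MCU resource-usage feedback, while magnitude ranking here is a common stand-in for "parameter importance". The function name and shapes are illustrative assumptions.

```python
import numpy as np

def prune_conv_channels(weights, keep_ratio=0.5):
    """Structured pruning sketch: rank the output channels of a conv
    weight tensor by L1-norm importance and keep the top fraction.

    weights: array of shape (out_channels, in_channels, kH, kW).
    Returns the pruned tensor and the kept channel indices.

    NOTE: generic illustration, not the paper's differentiable method,
    which learns channel gates jointly with MCU resource feedback.
    """
    # Per-channel importance: sum of absolute weights in each filter.
    importance = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(keep_ratio * weights.shape[0])))
    # Indices of the most important channels, kept in original order.
    kept = np.sort(np.argsort(importance)[::-1][:n_keep])
    return weights[kept], kept

# Example: a layer with 8 filters, pruned to half its channels.
rng = np.random.default_rng(0)
w = rng.standard_normal((8, 3, 3, 3))
pruned, kept = prune_conv_channels(w, keep_ratio=0.5)
print(pruned.shape)  # (4, 3, 3, 3)
```

Because entire channels are removed, the pruned layer stays a dense convolution with a smaller shape, which is what makes structured pruning directly reduce memory and compute on an MCU, unlike unstructured (per-weight) sparsity.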