The increasing spread of artificial neural networks does not stop at ultra-low-power edge devices. However, neural networks often have high computational demands and require specialized hardware accelerators to ensure that power and performance constraints are met. Manually optimizing neural networks together with the corresponding hardware accelerators is very challenging. This paper presents HANNAH (Hardware Accelerator and Neural Network seArcH), a framework for automated, combined hardware/software co-design of deep neural networks and hardware accelerators for resource- and power-constrained edge devices. The optimization approach uses an evolution-based search algorithm, a neural network template technique, and analytical KPI models for the configurable UltraTrail hardware accelerator template to find an optimized neural network and accelerator configuration. We demonstrate that HANNAH finds suitable neural networks with minimized power consumption and high accuracy for different audio classification tasks such as single-class wake word detection, multi-class keyword detection, and voice activity detection, outperforming related work.
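To make the evolution-based joint search concrete, the following is a minimal sketch of an evolutionary loop over a combined neural-network/accelerator configuration space. All parameter names, ranges, and the fitness proxy are illustrative assumptions, not HANNAH's actual search space; in HANNAH, accuracy would come from training the candidate network and power from the analytical KPI models of the UltraTrail accelerator template.

```python
import random

# Hypothetical joint search space: neural-network template parameters
# (e.g. channel width, number of blocks) and accelerator parameters
# (e.g. MAC-array size, on-chip SRAM). Names and ranges are illustrative.
SEARCH_SPACE = {
    "nn_channels":   [8, 16, 24, 32, 48, 64],
    "nn_blocks":     [2, 3, 4, 5, 6],
    "acc_mac_units": [4, 8, 16, 32],
    "acc_sram_kb":   [16, 32, 64, 128],
}

def random_candidate():
    """Sample one joint NN/accelerator configuration."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(cand, rate=0.3):
    """Resample each parameter with probability `rate`."""
    child = dict(cand)
    for k, values in SEARCH_SPACE.items():
        if random.random() < rate:
            child[k] = random.choice(values)
    return child

def fitness(cand):
    """Placeholder objective trading off a proxy for accuracy against a
    proxy for power; stands in for trained-network accuracy and the
    analytical KPI (power/latency) models used in the real framework."""
    accuracy_proxy = cand["nn_channels"] * cand["nn_blocks"]          # larger net -> higher proxy accuracy
    power_proxy = cand["acc_mac_units"] * cand["nn_channels"] / 64.0  # larger hardware -> higher proxy power
    return accuracy_proxy - 10.0 * power_proxy

def evolutionary_search(pop_size=20, generations=30, elite=5):
    """Simple elitist evolutionary loop over the joint configuration space."""
    population = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:elite]
        children = [mutate(random.choice(parents)) for _ in range(pop_size - elite)]
        population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolutionary_search()
    print("best joint configuration:", best)
```

The key design point illustrated here is that network and accelerator parameters are mutated and selected together, so the search can exploit trade-offs between model size and hardware cost rather than optimizing the two in isolation.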