Deep Neural Networks (DNNs) are heavily utilized in modern applications and are putting energy-constrained devices to the test. To mitigate the high energy consumption, approximate computing has been employed in DNN accelerators to balance the accuracy-energy trade-off. However, the approximation-induced accuracy loss can be very high and drastically degrade the performance of the DNN. Therefore, there is a need for a fine-grained mechanism that assigns specific DNN operations to approximation in order to maintain acceptable DNN accuracy while also achieving low energy consumption. In this paper, we present an automated framework for weight-to-approximation mapping that enables formal property exploration for approximate DNN accelerators. At the MAC unit level, our experimental evaluation surpassed already energy-efficient mappings by more than $2\times$ in terms of energy gains, while also supporting significantly more fine-grained control over the introduced approximation.
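To make the notion of weight-to-approximation mapping concrete, the following minimal Python sketch illustrates the general idea at the MAC unit level; it is not the paper's framework. The multiplier modes, their relative energy costs, the saliency proxy, and the thresholds are all illustrative assumptions.

```python
import numpy as np

# Hypothetical multiplier modes with placeholder relative energy costs.
# A real accelerator would expose its own set of approximate MAC configurations.
MODES = {
    "exact":     {"energy": 1.00},
    "approx_lo": {"energy": 0.70},
    "approx_hi": {"energy": 0.45},
}

def map_weights_to_modes(weights, saliency, thresholds=(0.8, 0.4)):
    """Assign each weight to a multiplier mode based on a per-weight saliency score.

    weights:    1-D array of layer weights.
    saliency:   1-D array of the same shape; higher = more accuracy-critical.
    thresholds: hypothetical saliency cut-offs separating exact, mildly
                approximate, and strongly approximate modes.
    """
    hi, lo = thresholds
    mapping = np.full(weights.shape, "approx_hi", dtype=object)
    mapping[saliency >= lo] = "approx_lo"
    mapping[saliency >= hi] = "exact"
    return mapping

def relative_energy(mapping):
    """Average per-MAC energy of the mapping, normalized to the all-exact case."""
    return np.mean([MODES[m]["energy"] for m in mapping])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=1000)
    # Use |w| as a crude stand-in for saliency; an actual framework would rely
    # on a sensitivity analysis of the trained network instead.
    s = np.abs(w) / np.abs(w).max()
    mapping = map_weights_to_modes(w, s)
    print(f"exact share: {(mapping == 'exact').mean():.1%}, "
          f"energy vs. exact: {relative_energy(mapping):.2f}x")
```

The sketch only shows the granularity of the decision (one mode per weight); the accuracy-energy trade-off in the paper is instead explored through formal property exploration rather than a fixed saliency threshold.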