Approximate computing methods have shown great potential for deep learning. Owing to their reduced hardware costs, these methods are especially suitable for inference tasks on battery-operated devices constrained by a tight power budget. However, approximate computing has not reached its full potential due to the lack of work on training methods. In this work, we discuss training methods for approximate hardware. We demonstrate how training needs to be specialized for approximate hardware, and propose methods that speed up the training process by up to 18×.