Mixed-precision deep neural networks achieve the energy efficiency and throughput needed for hardware deployment, particularly when resources are limited, without sacrificing accuracy. However, the optimal per-layer bit precision that preserves accuracy is not easy to find, especially given the abundance of models, datasets, and quantization techniques, which creates an enormous search space. To tackle this difficulty, a body of literature has emerged recently, and several frameworks achieving promising accuracy results have been proposed. In this paper, we first summarize the quantization techniques generally used in the literature. We then present a thorough survey of mixed-precision frameworks, categorized by their optimization technique (e.g., reinforcement learning) and quantization technique (e.g., deterministic rounding). Furthermore, we discuss the advantages and shortcomings of each framework and compare them side by side. We conclude with guidelines for future mixed-precision frameworks.
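For concreteness, the following is a minimal, illustrative sketch of the deterministic round-to-nearest uniform quantization mentioned above, applied with different bit widths per layer as in a mixed-precision setting. It is not drawn from any surveyed framework; the function name `quantize_rtn` and the symmetric per-tensor scaling are assumptions made for illustration.

```python
import numpy as np

def quantize_rtn(w, bits):
    """Deterministic round-to-nearest uniform quantization (illustrative sketch).

    `w` is a float array of weights; `bits` is the per-layer bit width.
    A symmetric scheme is assumed: the scale maps the largest magnitude
    in `w` onto the signed integer grid [-2^(bits-1), 2^(bits-1) - 1].
    """
    qmax = 2 ** (bits - 1) - 1
    max_abs = np.max(np.abs(w))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    # Deterministic rounding to the nearest grid point, then clipping to range.
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    # Return dequantized values, as used in simulated (fake) quantization.
    return q * scale

# Mixed precision: one layer quantized to 4 bits, another to 8 bits.
w1 = np.random.randn(64, 64).astype(np.float32)
w2 = np.random.randn(64, 64).astype(np.float32)
w1_q = quantize_rtn(w1, bits=4)
w2_q = quantize_rtn(w2, bits=8)
```

Assigning a bit width per layer in this way is exactly the choice whose search space the surveyed frameworks attempt to navigate.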