We present any-precision deep neural networks (DNNs), which are trained with a new method that allows the learned DNNs to be flexible in numerical precision during inference. The same model can be flexibly and directly set to different bit-widths at runtime, by truncating the least significant bits, to support dynamic speed and accuracy trade-offs. When all layers are set to low bit-widths, we show that the model achieves accuracy comparable to dedicated models trained at the same precision. This property facilitates flexible deployment of deep learning models in real-world applications, where trade-offs between model accuracy and runtime efficiency are often sought in practice. Previous literature presents solutions for training a model at each individual fixed efficiency/accuracy trade-off point, but how to produce a single model that is flexible in runtime precision remains largely unexplored. When the demand for an efficiency/accuracy trade-off varies from time to time, or even changes dynamically at runtime, it is infeasible to re-train models accordingly, and the storage budget may forbid keeping multiple models. Our proposed framework achieves this flexibility without performance degradation. More importantly, we demonstrate that this result is agnostic to model architecture and applicable to multiple vision tasks. Our code is released at https://github.com/SHI-Labs/Any-Precision-DNNs.
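To make the bit-truncation idea concrete, below is a minimal sketch (not the authors' released code) of how a uniformly quantized weight tensor could be reduced to a lower bit-width at runtime by dropping its least significant bits; the function names and the 8-to-4-bit setting are illustrative assumptions only.

```python
# Minimal sketch: runtime bit-width reduction by truncating least significant
# bits of a uniformly quantized tensor. All names here are hypothetical and
# not taken from the Any-Precision-DNNs repository.
import numpy as np

def quantize(w, bits=8):
    """Uniformly quantize w to signed integers at the given bit-width."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    q = np.round(w / scale).astype(np.int32)
    return q, scale

def truncate_bits(q, scale, from_bits=8, to_bits=4):
    """Derive a lower-precision tensor by dropping least significant bits."""
    shift = from_bits - to_bits
    q_low = q >> shift                  # keep only the most significant bits
    scale_low = scale * (2 ** shift)    # adjust scale so magnitudes are preserved
    return q_low, scale_low

w = np.random.randn(3, 3).astype(np.float32)
q8, s8 = quantize(w, bits=8)
q4, s4 = truncate_bits(q8, s8, from_bits=8, to_bits=4)
print("8-bit reconstruction:\n", np.round(q8 * s8, 3))
print("4-bit reconstruction:\n", np.round(q4 * s4, 3))
```

The same stored high-precision model would thus serve every lower bit-width directly, which is the deployment scenario the abstract describes; the paper's actual quantization scheme may differ in its details.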