Quantization with a small number of bits shows promise for reducing the latency and memory usage of deep neural networks. However, most quantization methods cannot readily handle complicated functions such as the exponential and the square root, and prior approaches involve complex training processes that must interact with floating-point values. This paper proposes a robust method for the full integer quantization of vision transformer networks that requires no intermediate floating-point computation. The proposed quantization techniques can be applied in various hardware or software implementations, including processor/memory architectures and FPGAs.
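To illustrate the flavor of integer-only evaluation of such functions, the sketch below computes an integer square root with Newton's iteration using only integer arithmetic, so every intermediate value fits a fixed-point datapath (e.g., on an FPGA). This is a minimal, generic example and is not taken from the paper; the function name int_sqrt and the iteration count are illustrative assumptions, and analogous polynomial or shift-based tricks would be needed for the exponential.

def int_sqrt(n: int, iters: int = 10) -> int:
    """Integer-only approximation of floor(sqrt(n)) via Newton's iteration.

    All intermediates are integers; no floating-point operations are used.
    Illustrative sketch only, not the method proposed in the paper.
    """
    if n == 0:
        return 0
    # Initial guess 2^ceil(bit_length/2) is guaranteed to be >= sqrt(n).
    x = 1 << ((n.bit_length() + 1) // 2)
    for _ in range(iters):
        x_next = (x + n // x) // 2  # Newton step with integer division
        if x_next >= x:             # monotone decrease has stopped: converged
            break
        x = x_next
    return x

For example, int_sqrt(1000000) returns 1000 after two iterations; the same integer-only structure is what allows normalization layers that need a square root to run without leaving the integer domain.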