Quantizing deep networks with adaptive bit-widths is a promising technique for efficient inference across many devices and resource constraints. In contrast to static methods that repeat the quantization process and train different models for different constraints, adaptive quantization enables us to flexibly adjust the bit-widths of a single deep network during inference for instant adaptation to different scenarios. While existing research shows encouraging results on common image classification benchmarks, this paper investigates how to train such adaptive networks more effectively. Specifically, we present two novel techniques for quantizing deep neural networks with adaptive bit-widths for weights and activations. First, we propose a collaborative strategy to choose a high-precision teacher for transferring knowledge to the low-precision student while jointly optimizing the model with all bit-widths. Second, to transfer knowledge effectively, we develop a dynamic block swapping method that randomly replaces blocks in the lower-precision student network with the corresponding blocks in the higher-precision teacher network. Extensive experiments on multiple image classification datasets and, for the first time, on video classification benchmarks demonstrate the efficacy of our approach over state-of-the-art methods.
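The two techniques in the abstract lend themselves to a compact illustration. Below is a minimal PyTorch sketch, not the authors' implementation: it assumes a toy uniform fake-quantizer with a straight-through estimator, shares one set of weights across all bit-widths, fixes the teacher to the highest bit-width (the paper's collaborative teacher-selection rule is simplified away), and realizes block swapping by randomly running a block at teacher precision during the student's forward pass. All names here (fake_quantize, QuantBlock, AdaptiveNet, train_step) are hypothetical.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

def fake_quantize(x, bits):
    """Uniform fake quantization with a straight-through estimator so the
    rounding step remains differentiable during training."""
    if bits >= 32:  # treat 32 as full precision
        return x
    qmax = 2 ** (bits - 1) - 1
    scale = x.detach().abs().max().clamp(min=1e-8) / qmax
    q = torch.round(x / scale).clamp(-qmax - 1, qmax) * scale
    return x + (q - x).detach()  # forward: quantized, backward: identity

class QuantBlock(nn.Module):
    """Residual-style block whose weights and activations are fake-quantized
    to a bit-width chosen at call time (weights shared across bit-widths)."""
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Linear(dim, dim)

    def forward(self, x, bits):
        w = fake_quantize(self.fc.weight, bits)
        out = F.relu(F.linear(x, w, self.fc.bias)) + x
        return fake_quantize(out, bits)

class AdaptiveNet(nn.Module):
    """Toy adaptive-bit-width network: a stack of quantizable blocks."""
    def __init__(self, dim=64, depth=4, num_classes=10):
        super().__init__()
        self.blocks = nn.ModuleList(QuantBlock(dim) for _ in range(depth))
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x, bits, swap_bits=None, swap_prob=0.0):
        # Dynamic block swapping: with probability swap_prob, execute a block
        # at the higher (teacher) precision instead of the student's.
        for blk in self.blocks:
            if swap_bits is not None and random.random() < swap_prob:
                x = blk(x, swap_bits)
            else:
                x = blk(x, bits)
        return self.head(x)

def train_step(model, x, y, bit_widths=(8, 4, 2), temp=4.0, swap_prob=0.5):
    """Jointly optimize all bit-widths; the highest precision serves as the
    teacher distilling knowledge into each lower-precision student."""
    teacher_bits = max(bit_widths)
    teacher_logits = model(x, bits=teacher_bits)
    losses = [F.cross_entropy(teacher_logits, y)]
    for bits in bit_widths:
        if bits == teacher_bits:
            continue
        # Student forward pass with random teacher-precision block swapping.
        s_logits = model(x, bits=bits, swap_bits=teacher_bits, swap_prob=swap_prob)
        kd = F.kl_div(F.log_softmax(s_logits / temp, dim=1),
                      F.softmax(teacher_logits.detach() / temp, dim=1),
                      reduction="batchmean") * temp ** 2
        losses.append(F.cross_entropy(s_logits, y) + kd)
    return sum(losses)

if __name__ == "__main__":
    model = AdaptiveNet()
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.randn(32, 64), torch.randint(0, 10, (32,))
    loss = train_step(model, x, y)
    opt.zero_grad(); loss.backward(); opt.step()
```

At inference, a single trained model can then be run at any supported precision, e.g. model(x, bits=4), with no block swapping or retraining.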