Many edge applications, such as collaborative robotics and spacecraft rendezvous, can benefit from 6D object pose estimation, but must perform it on embedded platforms. Unfortunately, existing 6D pose estimation networks are typically too large for deployment in such situations and must therefore be compressed while maintaining reliable performance. In this work, we present an approach to doing so based on quantizing such networks. More precisely, we introduce a module-wise quantization strategy that, in contrast to uniform and mixed-precision quantization, accounts for the modular structure of typical 6D pose estimation frameworks. We demonstrate that compressing these modules individually outperforms uniform and mixed-precision quantization techniques. Moreover, our experiments show that module-wise quantization can lead to a significant accuracy boost. We showcase the generality of our approach using different datasets, quantization methodologies, and network architectures, including the recent ZebraPose.
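To make the idea concrete, the short PyTorch sketch below illustrates what a module-wise scheme might look like: each top-level module of a pose-estimation network is assigned its own bit-width, rather than one uniform setting for the whole network. The network layout (backbone, pose_head), the bit-width choices, and the symmetric fake-quantization helper are illustrative assumptions for this sketch, not the method described above.

    import torch
    import torch.nn as nn

    def quantize_tensor(x, num_bits):
        # Symmetric uniform fake-quantization of a tensor to num_bits.
        qmax = 2 ** (num_bits - 1) - 1
        scale = x.abs().max().clamp(min=1e-8) / qmax
        return torch.round(x / scale).clamp(-qmax, qmax) * scale

    def quantize_module_wise(model, bit_config):
        # Give each top-level module its own bit-width; modules absent
        # from bit_config are left at full precision.
        for name, module in model.named_children():
            bits = bit_config.get(name)
            if bits is None:
                continue
            with torch.no_grad():
                for p in module.parameters():
                    p.copy_(quantize_tensor(p, bits))

    # Toy stand-in for a modular 6D pose network (hypothetical layout).
    model = nn.ModuleDict({
        "backbone": nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()),
        "pose_head": nn.Conv2d(16, 6, 1),
    })

    # Module-wise setting: aggressive on the backbone, gentler on the head.
    quantize_module_wise(model, {"backbone": 4, "pose_head": 8})

In this toy configuration the backbone is quantized more aggressively than the pose head, which is the kind of per-module trade-off that uniform quantization cannot express.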