Re-identification (ReID) aims to identify the same instance across different cameras. Existing ReID methods mostly rely on alignment-based or attention-based strategies to generate effective feature representations. However, most of these methods extract generic features from each input image alone, overlooking the relevance between the images being compared. To fill this gap, we propose a novel end-to-end trainable dynamic convolution framework named Instance and Pair-Aware Dynamic Networks. The proposed model is composed of three main branches: a self-guided dynamic branch is constructed to strengthen instance-specific features, focusing on each single image, and a mutual-guided dynamic branch is designed to generate pair-aware features for each pair of images to be compared. Extensive experiments are conducted to verify the effectiveness of the proposed algorithm. We evaluate it on several mainstream person and vehicle ReID datasets, including CUHK03, DukeMTMC-reID, Market-1501, VeRi-776, and VehicleID. On some datasets our algorithm outperforms state-of-the-art methods, and on the others it achieves comparable performance.
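To make the idea of instance- and pair-conditioned dynamic convolution concrete, the following is a minimal NumPy sketch, not the paper's actual architecture: kernels for a 1x1 convolution are generated on the fly from pooled feature descriptors, from the image's own descriptor in the self-guided case and from its comparison partner's descriptor in the mutual-guided case. All names (`KernelGenerator`, `dynamic_conv1x1`) and the linear kernel generator are hypothetical assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def global_pool(feat):
    # feat: (C, H, W) backbone feature map -> (C,) descriptor
    return feat.mean(axis=(1, 2))

def dynamic_conv1x1(feat, weight):
    # Apply a dynamically generated 1x1 convolution.
    # feat: (C_in, H, W), weight: (C_out, C_in) -> (C_out, H, W)
    c, h, w = feat.shape
    return (weight @ feat.reshape(c, -1)).reshape(-1, h, w)

class KernelGenerator:
    """Hypothetical linear generator mapping a descriptor to a 1x1 kernel."""
    def __init__(self, c_in, c_out):
        self.W = rng.standard_normal((c_out * c_in, c_in)) * 0.01
        self.c_in, self.c_out = c_in, c_out

    def __call__(self, desc):
        # desc: (C_in,) -> kernel (C_out, C_in)
        return (self.W @ desc).reshape(self.c_out, self.c_in)

c = 8
gen_self = KernelGenerator(c, c)    # self-guided dynamic branch
gen_mutual = KernelGenerator(c, c)  # mutual-guided dynamic branch

x_a = rng.standard_normal((c, 4, 4))  # backbone features of image A
x_b = rng.standard_normal((c, 4, 4))  # backbone features of image B

# Self-guided: the kernel is conditioned on the image's own descriptor.
inst_a = dynamic_conv1x1(x_a, gen_self(global_pool(x_a)))

# Mutual-guided: each image's kernel is conditioned on the *other*
# image in the pair, yielding pair-aware features.
pair_a = dynamic_conv1x1(x_a, gen_mutual(global_pool(x_b)))
pair_b = dynamic_conv1x1(x_b, gen_mutual(global_pool(x_a)))

print(inst_a.shape, pair_a.shape, pair_b.shape)
```

The key design point this sketch illustrates is that, unlike a standard convolution whose weights are fixed after training, the effective kernel here differs per instance and per pair, so the extracted features depend on which two images are being compared.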