Research into the task of re-identification (ReID) is gaining momentum in computer vision due to its many use cases and zero-shot learning nature. This paper proposes a computationally efficient fine-grained ReID model, FGReID, which is among the first models to unify image and video ReID while keeping the number of training parameters minimal. FGReID takes advantage of video-based pre-training and spatial feature attention to improve performance on both video and image ReID tasks. FGReID achieves state-of-the-art (SOTA) performance on the MARS, iLIDS-VID, and PRID-2011 video person ReID benchmarks. Eliminating temporal pooling yields an image ReID model that surpasses SOTA on the CUHK01 and Market1501 image person ReID benchmarks. FGReID also achieves near-SOTA performance on the vehicle ReID dataset VeRi, demonstrating its ability to generalize. Additionally, we conduct an ablation study analyzing the key factors influencing model performance on ReID tasks. Finally, we discuss the moral dilemmas related to ReID tasks, including the potential for misuse. Code for this work is publicly available at https://github.com/ppriyank/Fine-grained-ReIdentification.