Nowadays, real data for the person re-identification (ReID) task faces privacy issues, e.g., the banned dataset DukeMTMC-ReID, which makes it much harder to collect real data for ReID. Meanwhile, the labor cost of labeling ReID data remains very high, which further hinders ReID research. Therefore, many methods turn to synthetic images as an alternative to real images for training ReID algorithms. However, there is an inevitable domain gap between synthetic and real images. In previous methods, the generation process is based on fixed virtual scenes, and the resulting synthetic training data cannot be adapted automatically to different target real scenes. To address this problem, we propose TAGPerson, a novel Target-Aware Generation pipeline for producing synthetic person images. Specifically, it involves a parameterized rendering method in which the parameters are controllable and can be adjusted according to the target scenes. In TAGPerson, we extract information from the target scenes and use it to control our parameterized rendering process, generating target-aware synthetic images that hold a smaller gap to the real images in the target domain. In our experiments, our target-aware synthetic images achieve much higher performance than generalized synthetic images on MSMT17, i.e., 47.5% vs. 40.9% rank-1 accuracy. We will release this toolkit\footnote{\noindent Code is available at \href{https://github.com/tagperson/tagperson-blender}{https://github.com/tagperson/tagperson-blender}} for the ReID community to generate synthetic images at any desired taste.
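To make the idea of parameterized, target-aware rendering concrete, the following is a minimal illustrative sketch (not the official TAGPerson implementation); the names \texttt{RenderParams}, \texttt{fit\_params\_to\_target}, and the specific target-scene statistics are assumptions for illustration only. It shows how controllable rendering parameters could be sampled around statistics extracted from a target domain before being passed to a renderer such as Blender.
\begin{verbatim}
# Hypothetical sketch of target-aware parameter control; not the
# official TAGPerson code. All names and statistics are illustrative.
from dataclasses import dataclass
import random


@dataclass
class RenderParams:
    """Controllable rendering parameters for one synthetic person image."""
    camera_elevation_deg: float  # camera pitch relative to the person
    camera_distance_m: float     # camera-to-person distance
    illumination: float          # scene brightness factor
    image_height_px: int         # output resolution (height)


def fit_params_to_target(target_stats: dict) -> RenderParams:
    """Sample parameters around statistics measured on the target scene,
    e.g., viewpoint and resolution estimated from unlabeled real images."""
    return RenderParams(
        camera_elevation_deg=random.gauss(target_stats["elevation_mean"],
                                          target_stats["elevation_std"]),
        camera_distance_m=random.gauss(target_stats["distance_mean"],
                                       target_stats["distance_std"]),
        illumination=random.uniform(*target_stats["illumination_range"]),
        image_height_px=int(random.gauss(target_stats["height_mean"],
                                         target_stats["height_std"])),
    )


if __name__ == "__main__":
    # Toy target-scene statistics; in practice these would be
    # extracted from images of the target domain (e.g., MSMT17).
    target_stats = {
        "elevation_mean": 15.0, "elevation_std": 5.0,
        "distance_mean": 6.0, "distance_std": 1.5,
        "illumination_range": (0.6, 1.2),
        "height_mean": 256, "height_std": 32,
    }
    params = fit_params_to_target(target_stats)
    print(params)  # these parameters would then drive the renderer
\end{verbatim}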