Person re-identification (re-ID) models trained on one domain often fail to generalize well to another. In this work, we present a "learning via translation" framework. In the baseline, we translate the labeled images from the source to the target domain in an unsupervised manner, and then train re-ID models on the translated images with supervised methods. Yet, as an essential part of this framework, unsupervised image-image translation suffers from the loss of source-domain label information during translation. Our motivation is two-fold. First, for each image, the discriminative cues contained in its ID label should be preserved after translation. Second, given that the two domains contain entirely different persons, a translated image should be dissimilar to any image of the target IDs. To this end, we propose to preserve two types of unsupervised similarities: 1) the self-similarity of an image before and after translation, and 2) the domain-dissimilarity between a translated source image and a target image. Both constraints are implemented in the similarity preserving generative adversarial network (SPGAN), which consists of a Siamese network and a CycleGAN. Through domain adaptation experiments, we show that images generated by SPGAN are better suited for domain adaptation and yield consistent and competitive re-ID accuracy on two large-scale datasets.
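To make the two constraints concrete, they can be enforced as a contrastive loss over embeddings produced by the Siamese network: an image and its translation form a positive pair (self-similarity), while a translated source image and a target image form a negative pair (domain-dissimilarity). The following is a minimal PyTorch sketch under these assumptions; the function name, the L2-normalization step, and the margin value are illustrative choices, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(emb1, emb2, label, margin=2.0):
    """Contrastive loss for the Siamese branch (illustrative sketch).

    label = 1: positive pair, e.g. an image and its translated version
               -> pull embeddings together (self-similarity).
    label = 0: negative pair, e.g. a translated source image and a target image
               -> push embeddings at least `margin` apart (domain-dissimilarity).
    """
    # L2-normalize the embeddings, then measure their Euclidean distance.
    emb1 = F.normalize(emb1, dim=1)
    emb2 = F.normalize(emb2, dim=1)
    d = F.pairwise_distance(emb1, emb2)
    positive = label * d.pow(2)                      # attract positive pairs
    negative = (1 - label) * F.relu(margin - d).pow(2)  # repel negative pairs
    return (positive + negative).mean()
```

In use, with a hypothetical Siamese encoder `S`, a generator `G` translating source images `x_s` to the target style, and target images `x_t`, the similarity-preserving term would combine `contrastive_loss(S(x_s), S(G(x_s)), label=1)` with `contrastive_loss(S(G(x_s)), S(x_t), label=0)`, added to the usual CycleGAN objectives.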