Unsupervised domain adaptive (UDA) person re-identification (re-ID) aims to learn identity information from labeled images in source domains and apply it to unlabeled images in a target domain. A major weakness of many unsupervised re-ID methods is that they degrade under large domain variations such as illumination, viewpoint, and occlusion. In this paper, we propose a Synthesis Model Bank (SMB) to handle illumination variation in unsupervised person re-ID. The proposed SMB consists of several convolutional neural networks (CNNs) for feature extraction and Mahalanobis matrices for distance metrics. They are trained on synthetic data rendered under different illumination conditions, so that their synergistic effect makes the SMB robust to illumination variation. To better quantify illumination intensity and improve the quality of the synthetic images, we introduce a new 3D virtual-human dataset for GAN-based image synthesis. In our experiments, the proposed SMB outperforms other synthesis methods on several re-ID benchmarks.
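To make the bank structure concrete, below is a minimal sketch of how an SMB-style matcher could be organized: each branch pairs a CNN feature extractor with a Mahalanobis matrix learned for one illumination range, and the branch matching the estimated illumination of a query is used for distance computation. All names here (`SMBBranch`, `SynthesisModelBank`, the bin-based branch selection) are illustrative assumptions, not the paper's actual implementation, since the abstract does not specify how branches are indexed or selected.

```python
import numpy as np

class SMBBranch:
    """One bank entry: a feature extractor paired with a Mahalanobis
    metric learned on synthetic data for one illumination range.
    (Hypothetical structure for illustration.)"""
    def __init__(self, extractor, M):
        self.extractor = extractor  # CNN feature extractor (any callable)
        self.M = M                  # learned Mahalanobis matrix (d x d, PSD)

    def distance(self, img_a, img_b):
        # Mahalanobis distance: d(x, y) = sqrt((x - y)^T M (x - y))
        diff = self.extractor(img_a) - self.extractor(img_b)
        return float(np.sqrt(diff @ self.M @ diff))

class SynthesisModelBank:
    """Bank of branches indexed by illumination bins; the branch whose
    bin contains the estimated illumination handles the match."""
    def __init__(self, branches, bin_edges):
        self.branches = branches    # list of SMBBranch, one per bin
        self.bin_edges = bin_edges  # illumination thresholds between bins

    def select_branch(self, illumination):
        idx = int(np.searchsorted(self.bin_edges, illumination))
        return self.branches[idx]

    def distance(self, img_a, img_b, illumination):
        return self.select_branch(illumination).distance(img_a, img_b)

# Toy usage with a dummy "extractor" (identity on flattened arrays):
d = 4
branch = SMBBranch(lambda img: img.reshape(-1), np.eye(d))
bank = SynthesisModelBank([branch], bin_edges=[])
x, y = np.ones(d), np.zeros(d)
print(bank.distance(x, y, illumination=0.5))  # 2.0 (Euclidean when M = I)
```

With `M` set to the identity the metric reduces to Euclidean distance; a learned `M` instead reweights feature dimensions per illumination bin, which is what lets each branch specialize to its lighting condition.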