Person re-identification plays a significant role in realistic scenarios due to its various applications in public security and video surveillance. Recently, supervised or semi-supervised learning paradigms, which benefit from large-scale datasets and strong computing power, have achieved competitive performance on specific target domains. However, when Re-ID models are directly deployed in a new domain without target samples, they often suffer considerable performance degradation and poor domain generalization. To address this challenge, we propose a Deep Multimodal Fusion network that exploits rich semantic knowledge to assist representation learning during pre-training. Importantly, a multimodal fusion strategy is introduced to translate the features of different modalities into a common space, which significantly boosts the generalization capability of the Re-ID model. In the fine-tuning stage, a realistic dataset is adopted to fine-tune the pre-trained model for better alignment with the distribution of real-world data. Comprehensive experiments on benchmarks demonstrate that our method outperforms previous domain generalization and meta-learning methods by a clear margin. Our source code will be made publicly available at https://github.com/JeremyXSC/DMF.
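To make the common-space idea concrete, the following is a minimal sketch of projecting two modalities (e.g. image and text features) into a shared embedding space and fusing them. The dimensions, the linear projections, and the averaging fusion are illustrative assumptions for exposition, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1, eps=1e-12):
    """Scale each feature vector to unit length for comparable similarities."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

class CommonSpaceFusion:
    """Project each modality into a shared d-dim space, then fuse by averaging.

    Hypothetical sketch: real systems learn W_img / W_txt end-to-end;
    here they are randomly initialized so the example is runnable.
    """
    def __init__(self, img_dim=2048, txt_dim=768, common_dim=512):
        self.W_img = rng.standard_normal((img_dim, common_dim)) / np.sqrt(img_dim)
        self.W_txt = rng.standard_normal((txt_dim, common_dim)) / np.sqrt(txt_dim)

    def __call__(self, img_feat, txt_feat):
        z_img = l2_normalize(img_feat @ self.W_img)  # image branch -> common space
        z_txt = l2_normalize(txt_feat @ self.W_txt)  # text branch -> common space
        return l2_normalize((z_img + z_txt) / 2.0)   # simple average fusion

fusion = CommonSpaceFusion()
img = rng.standard_normal((4, 2048))  # a batch of image features
txt = rng.standard_normal((4, 768))   # paired semantic (text) features
fused = fusion(img, txt)
print(fused.shape)  # (4, 512)
```

Once both modalities live in the same unit-normalized space, a single metric (e.g. cosine similarity) can rank gallery candidates regardless of which modality contributed the evidence.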