Person re-identification (Re-ID) plays a significant role in realistic scenarios due to its wide applications in public security and video surveillance. Recently, supervised and semi-supervised learning paradigms, which benefit from large-scale datasets and strong computing resources, have achieved competitive performance on specific target domains. However, when Re-ID models are directly deployed in a new domain without target samples, they often suffer from considerable performance degradation and poor domain generalization. To address this challenge, in this paper we propose DMF, a Deep Multimodal Fusion network for general scenarios in the person re-identification task, where rich semantic knowledge is introduced to assist feature representation learning during the pre-training stage. On top of this, a multimodal fusion strategy is introduced to translate data from different modalities into the same feature space, which significantly boosts the generalization capability of the Re-ID model. In the fine-tuning stage, a realistic dataset is adopted to fine-tune the pre-trained model and align its distribution with the real world. Comprehensive experiments on benchmarks demonstrate that our proposed method significantly outperforms previous domain generalization and meta-learning methods. Our source code will also be publicly available at https://github.com/JeremyXSC/DMF.
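The core fusion idea described above, mapping features from different modalities (e.g., image and semantic/text features) into one shared embedding space before combining them, can be sketched minimally in NumPy. All dimensions, weight matrices, and function names below are illustrative assumptions, not the paper's actual architecture; in practice the projections would be learned encoders trained end to end:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature dimensions: 2048-d image features (e.g., from a
# CNN backbone) and 768-d semantic features are projected into a shared
# 512-d embedding space before fusion.
D_IMG, D_TXT, D_SHARED = 2048, 768, 512

# Random linear projections stand in for learned modality encoders.
W_img = rng.standard_normal((D_IMG, D_SHARED)) / np.sqrt(D_IMG)
W_txt = rng.standard_normal((D_TXT, D_SHARED)) / np.sqrt(D_TXT)

def l2_normalize(x, axis=-1, eps=1e-12):
    """Normalize feature vectors to unit length along the given axis."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def fuse(img_feat, txt_feat):
    """Project both modalities into the shared space and average them."""
    z_img = l2_normalize(img_feat @ W_img)
    z_txt = l2_normalize(txt_feat @ W_txt)
    return l2_normalize(z_img + z_txt)

# A toy batch of 4 samples with paired image and semantic features.
img = rng.standard_normal((4, D_IMG))
txt = rng.standard_normal((4, D_TXT))
z = fuse(img, txt)
print(z.shape)  # (4, 512): one shared-space embedding per sample
```

Because both modalities land in the same space, a single metric (e.g., cosine similarity on the fused embeddings) can be used for retrieval regardless of which modalities are present.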