Despite their success in cross-camera person matching, existing person re-identification (re-id) methods struggle when deployed to a new, unseen scenario. Recent efforts have largely been devoted to domain adaptive person re-id, where extensive unlabeled data from the new scenario are exploited in a transductive learning manner. However, for each scenario it is necessary to first collect enough data and then train such a domain adaptive re-id model, which restricts practical application. Instead, we aim to exploit multiple labeled datasets to learn generalized, domain-invariant representations for person re-id, which are expected to be universally effective for any new re-id scenario. To pursue practicability in real-world systems, we collect all the person re-id datasets in this field (20 datasets) and select the three most frequently used ones (i.e., Market1501, DukeMTMC, and MSMT17) as unseen target domains. In addition, we develop DataHunter, which collects over 300K weakly annotated images, named YouTube-Human, from YouTube street-view videos; these are combined with the 17 remaining fully labeled datasets to form multiple source domains. On this large and challenging benchmark, called FastHuman (~440K labeled images), we further propose a simple yet effective Semi-Supervised Knowledge Distillation (SSKD) framework. SSKD effectively exploits the weakly annotated data by assigning soft pseudo labels to YouTube-Human, improving the models' generalization ability. Experiments under several protocols verify the effectiveness of the proposed SSKD framework for domain generalizable person re-id, which is even comparable to supervised learning on the target domains. Last but most importantly, we hope the proposed FastHuman benchmark can drive the next wave of development in domain generalizable person re-id algorithms.
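
The abstract's core technical idea is distillation from soft pseudo labels: a teacher trained on the fully labeled source domains assigns soft targets to the weakly annotated YouTube-Human images, and a student learns from both the hard identity labels and those soft targets. Below is a minimal sketch of that idea in PyTorch; the function name, temperature, and loss weighting (`lambda_soft`) are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def sskd_step(student, teacher, labeled_batch, weak_batch,
              optimizer, temperature=4.0, lambda_soft=1.0):
    """One training step combining supervised and soft pseudo-label losses.

    Hypothetical sketch: `student` and `teacher` are identity-classification
    networks; `labeled_batch` comes from the fully labeled source domains,
    `weak_batch` from the weakly annotated YouTube-Human images.
    """
    images, labels = labeled_batch      # fully labeled source images + IDs
    weak_images = weak_batch            # weakly annotated images, no IDs

    # Supervised identity loss on the labeled source domains.
    logits = student(images)
    loss_id = F.cross_entropy(logits, labels)

    # Teacher assigns soft pseudo labels to the weakly annotated images.
    with torch.no_grad():
        soft_targets = F.softmax(teacher(weak_images) / temperature, dim=1)

    # Student matches the teacher's soft distribution via KL divergence,
    # scaled by T^2 as is standard in knowledge distillation.
    student_log_probs = F.log_softmax(student(weak_images) / temperature, dim=1)
    loss_soft = F.kl_div(student_log_probs, soft_targets,
                         reduction="batchmean") * temperature ** 2

    loss = loss_id + lambda_soft * loss_soft
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss_id.item(), loss_soft.item()
```

In this reading, the weakly annotated data never needs manual identity labels: the teacher's output distribution serves as the supervisory signal, which is what allows the 300K+ YouTube-Human images to contribute to generalization across the source domains.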


