Person re-identification (re-ID) has been receiving increasing attention in recent years due to its importance for both science and society. Machine learning, and particularly Deep Learning (DL), has become the main re-ID tool, allowing researchers to achieve unprecedented accuracy levels on benchmark datasets. However, DL models are known to generalize poorly: models trained to achieve high accuracy on one dataset perform poorly on other datasets and require re-training. To address this issue, we present a model without trainable parameters that shows great potential for high generalization. It combines a fully analytical feature extraction and similarity ranking scheme with DL-based human parsing, which is used to obtain the initial subregion classification. We show that such a combination largely eliminates the drawbacks of existing analytical methods. We use interpretable color and texture features that have human-readable similarity measures associated with them. To verify the proposed method, we conduct experiments on the Market1501 and CUHK03 datasets, achieving competitive rank-1 accuracy comparable with that of DL models. Most importantly, we show that our method achieves 63.9% and 93.5% rank-1 cross-domain accuracy when applied to transfer learning tasks, which is significantly higher than the previously reported 30-50% transfer accuracy. We discuss potential ways of adding new features to further improve the model. We also show the advantage of interpretable features for constructing human-generated queries from a verbal description, enabling search without a query image.
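The following is a minimal sketch, not the authors' implementation, of the general idea outlined above: interpretable per-region color features compared with a human-readable similarity measure, where the region masks are assumed to come from an off-the-shelf DL human parser. The region names, the use of hue histograms, and histogram intersection as the similarity measure are illustrative assumptions.

```python
import numpy as np

REGIONS = ("head", "torso", "legs")  # hypothetical parsing labels

def region_color_histogram(image_hsv, mask, bins=16):
    """Hue histogram of the pixels belonging to one parsed body region."""
    hues = image_hsv[..., 0][mask]                 # hue channel, masked pixels only
    hist, _ = np.histogram(hues, bins=bins, range=(0.0, 1.0))
    return hist / (hist.sum() + 1e-8)              # normalize to a distribution

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical color distributions."""
    return float(np.minimum(h1, h2).sum())

def person_similarity(query_feats, gallery_feats):
    """Average per-region similarity; each term is directly interpretable,
    e.g. 'torso colors match with score 0.8'."""
    scores = [histogram_intersection(query_feats[r], gallery_feats[r])
              for r in REGIONS if r in query_feats and r in gallery_feats]
    return sum(scores) / len(scores) if scores else 0.0
```

Because each feature has a human-readable meaning, a query could in principle also be constructed directly from a verbal description (e.g. a hue histogram peaked at red for the torso region) rather than extracted from a query image, in the spirit of the image-free search mentioned in the abstract.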