The training loss function that enforces certain training-sample distribution patterns plays a critical role in building a re-identification (ReID) system. Besides the basic requirement of discrimination, i.e., that features corresponding to different identities should not be mixed, additional intra-class distribution constraints, such as requiring features from the same identity to lie close to their center, have been adopted to construct losses. Despite the advances brought by various new loss functions, it remains challenging to strike a balance between reducing intra-class variation and allowing a certain degree of distribution freedom. In this paper, we propose a new loss based on center predictivity: a sample must be positioned in the feature space such that the location of the center of same-class samples can be roughly predicted from it. The prediction error is then treated as a loss, called the Center Prediction Loss (CPL). We show that, without introducing additional hyper-parameters, this new loss imposes a more flexible intra-class distribution constraint while ensuring that samples from different classes remain well separated. Extensive experiments on various real-world ReID datasets show that the proposed loss achieves superior performance and is also complementary to existing losses.
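To make the center-predictivity idea concrete, below is a minimal PyTorch sketch: a learned predictor head maps each sample's feature to a guessed class center, and the prediction error serves as the loss. The two-layer MLP predictor and the in-batch mean as the center estimate are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CenterPredictionLoss(nn.Module):
    """Sketch of a center-prediction loss.

    A predictor head maps each sample's feature to a predicted
    class-center location; the loss is the squared error between
    that prediction and an estimate of the true class center
    (here, the mean of same-class features in the mini-batch).
    """

    def __init__(self, feat_dim: int):
        super().__init__()
        # Illustrative predictor: a two-layer MLP (an assumption,
        # not necessarily the architecture used in the paper).
        self.predictor = nn.Sequential(
            nn.Linear(feat_dim, feat_dim),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # features: (B, D) embeddings; labels: (B,) identity labels.
        pred_centers = self.predictor(features)  # (B, D)

        classes = labels.unique()
        loss = features.new_zeros(())
        for c in classes:
            mask = labels == c
            # Detach the center target so gradients flow only
            # through the prediction path, not the target.
            center = features[mask].mean(dim=0).detach()
            loss = loss + F.mse_loss(
                pred_centers[mask],
                center.expand_as(pred_centers[mask]),
            )
        return loss / classes.numel()


# Usage example with random features and identity labels.
feats = torch.randn(32, 256)
ids = torch.randint(0, 8, (32,))
cpl = CenterPredictionLoss(feat_dim=256)
print(cpl(feats, ids))
```

Note that, unlike a center loss that pulls features directly onto their centers, this formulation only requires each feature to carry enough information to predict the center, which leaves more freedom in how the intra-class distribution is shaped.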