Domain generalization (DG) for person re-identification (ReID) is a challenging problem, as no access to target domain data is permitted during training. Most existing DG ReID methods employ the same features to update the parameters of both the feature extractor and the classifier. This common practice causes the model to overfit to the feature styles present in the source domain, resulting in sub-optimal generalization on target domains even when meta-learning is used. To solve this problem, we propose a novel style interleaved learning framework. Unlike conventional learning strategies, interleaved learning performs two forward propagations and one backward propagation in each iteration. We employ features of interleaved styles to update the feature extractor and the classifiers in separate forward propagations, which helps the model avoid overfitting to particular domain styles. To fully exploit the advantages of style interleaved learning, we further propose a novel feature stylization approach that diversifies feature styles. This approach not only mixes the feature styles of multiple training samples, but also samples new and meaningful feature styles from a batch-level style distribution. Extensive experimental results show that our model consistently outperforms state-of-the-art methods on large-scale benchmarks for DG ReID, while also offering clear advantages in computational efficiency. Code is available at https://github.com/WentaoTan/Interleaved-Learning.
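To make the feature stylization idea concrete, the following is a minimal, hypothetical sketch in NumPy. It treats the channel-wise mean and standard deviation of each feature map as its "style", mixes the styles of random sample pairs, and additionally samples fresh styles from a Gaussian fitted to the batch-level style statistics. The function name, the Beta-mixing coefficient, and the 50/50 combination of mixed and sampled styles are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def stylize_features(feats, rng=None, mix_ratio=0.5):
    """Illustrative feature stylization (assumed details, not the paper's exact method).

    feats: (B, C, H, W) feature maps. Each map's channel-wise mean/std
    ("style") is replaced by a combination of (a) a mixture of two samples'
    styles and (b) a new style drawn from the batch-level style distribution.
    """
    rng = np.random.default_rng() if rng is None else rng
    B, C, _, _ = feats.shape
    mu = feats.mean(axis=(2, 3), keepdims=True)           # per-instance channel means
    sig = feats.std(axis=(2, 3), keepdims=True) + 1e-6    # per-instance channel stds
    normed = (feats - mu) / sig                           # style-removed content

    # (a) Mix the styles of randomly paired samples (MixStyle-like interpolation).
    perm = rng.permutation(B)
    lam = rng.beta(0.1, 0.1, size=(B, 1, 1, 1))
    mu_mix = lam * mu + (1 - lam) * mu[perm]
    sig_mix = lam * sig + (1 - lam) * sig[perm]

    # (b) Sample new styles from a Gaussian over the batch's (mu, sigma) statistics.
    mu_new = mu.mean(0) + mu.std(0) * rng.standard_normal((B, C, 1, 1))
    sig_new = sig.mean(0) + sig.std(0) * rng.standard_normal((B, C, 1, 1))

    # Combine mixed and freshly sampled styles, then re-apply to the content.
    mu_out = mix_ratio * mu_mix + (1 - mix_ratio) * mu_new
    sig_out = mix_ratio * sig_mix + (1 - mix_ratio) * np.abs(sig_new)
    return normed * sig_out + mu_out
```

In the interleaved framework described above, the stylized features produced this way would update the classifiers in one forward propagation, while the original-style features update the feature extractor in the other.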