Deep metric learning (DML) is a cornerstone of many computer vision applications. It aims to learn a mapping from the input domain to an embedding space in which semantically similar objects are located nearby and dissimilar objects far from one another. The target similarity on the training data is defined by the user in the form of ground-truth class labels. However, while the embedding space learns to mimic the user-provided similarity on the training data, it should also generalize to novel categories not seen during training. Besides the user-provided ground-truth training labels, many additional visual factors (such as viewpoint changes or shape peculiarities) exist and imply different notions of similarity between objects, affecting generalization to images unseen during training. Existing approaches usually learn a single embedding space directly on all available training data; they struggle to encode all these different types of relationships and do not generalize well. We propose to build a more expressive representation by jointly splitting the embedding space and the data hierarchically into smaller sub-parts. We successively focus on smaller subsets of the training data, reducing their variance and learning a different embedding subspace for each subset. Moreover, the subspaces are learned jointly so as to cover not only the intricacies but also the breadth of the data. Only then, in the conquering stage, do we build the final embedding from the subspaces. The proposed algorithm acts as a transparent wrapper that can be placed around arbitrary existing DML methods. Our approach significantly improves upon the state of the art on image retrieval, clustering, and re-identification tasks evaluated on the CUB200-2011, CARS196, Stanford Online Products, In-shop Clothes, and PKU VehicleID datasets.
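The divide-and-conquer scheme described above can be sketched roughly as follows. This is a minimal numpy illustration, not the paper's implementation: the k-means routine, the random linear map standing in for trained embedding sub-networks, and all function names are hypothetical placeholders. The idea shown is only the structure: the data is divided into clusters, each cluster is assigned its own slice of the embedding dimensions, and the final embedding is the concatenation of all subspaces.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means used for the 'divide' step (illustrative)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        # Squared distances of every point to every center.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

def divide_and_conquer_embedding(X, k=4, dim=8, seed=1):
    # Divide: partition the training data into k subsets. (In the paper,
    # clustering runs on the current embedding and is refreshed periodically;
    # here we cluster the raw features once for simplicity.)
    labels = kmeans(X, k)
    # Each subset is paired with its own slice of the embedding dimensions,
    # so each learner is responsible for a lower-variance part of the data.
    slices = np.array_split(np.arange(dim), k)
    # Stand-in learner: a single random linear map. In practice, each slice
    # would be trained with a DML loss on its assigned cluster only.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], dim)) / np.sqrt(X.shape[1])
    # Conquer: the final embedding concatenates all learned subspaces.
    Z = X @ W
    return Z, labels, slices

# Toy usage: 40 samples with 5 features, split into 4 subspaces of 2 dims each.
X = np.random.default_rng(2).normal(size=(40, 5))
Z, labels, slices = divide_and_conquer_embedding(X, k=4, dim=8)
```

Because the wrapper only controls how data and dimensions are partitioned, the inner learner (and its loss) can be any existing DML method, which is what makes the approach transparent to the base model.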