Deep distance metric learning (DDML), which learns image similarity metrics in an end-to-end manner with convolutional neural networks, has achieved encouraging results in many computer vision tasks. $L_2$-normalization of the embedding space has been used to improve the performance of several DDML methods. However, the commonly used Euclidean distance is no longer an accurate metric for an $L_2$-normalized embedding space, i.e., a hyper-sphere. Another challenge of current DDML methods is that their loss functions are usually defined on rigid data formats, such as triplets; an extra process is therefore needed to prepare data in these specific formats. In addition, their losses are computed from a limited number of samples, which leads to a lack of a global view of the embedding space. In this paper, we replace the Euclidean distance with the cosine similarity to better exploit the $L_2$-normalization, which also attenuates the curse of dimensionality. More specifically, a novel loss function based on the von Mises-Fisher distribution is proposed to learn a compact hyper-spherical embedding space. Moreover, a new efficient learning algorithm is developed to better capture the global structure of the embedding space. Experiments on both classification and retrieval tasks on several standard datasets show that our method achieves state-of-the-art performance with a simpler training procedure. Furthermore, we demonstrate that, even with a small number of convolutional layers, our model still obtains significantly better classification performance than the widely used softmax loss.
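The relation between the two metrics on the unit hyper-sphere can be sketched as follows (a NumPy illustration, not code from the paper): for $L_2$-normalized embeddings, squared Euclidean distance and cosine similarity are tied by the identity $\|a - b\|^2 = 2 - 2\cos(a, b)$, so ranking by either is equivalent, while cosine similarity stays bounded in $[-1, 1]$ regardless of dimension.

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    """Project each embedding onto the unit hyper-sphere."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

rng = np.random.default_rng(0)
a = l2_normalize(rng.normal(size=(4, 128)))  # 4 embeddings, 128-d
b = l2_normalize(rng.normal(size=(4, 128)))

cos_sim = np.sum(a * b, axis=-1)            # cosine similarity on the sphere
sq_euclid = np.sum((a - b) ** 2, axis=-1)   # squared Euclidean distance

# On the unit sphere: ||a - b||^2 = 2 - 2 * cos(a, b)
assert np.allclose(sq_euclid, 2 - 2 * cos_sim)
```

The embedding dimension (128) and the random inputs here are illustrative only; the identity holds for any pair of unit-norm vectors.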