In metric learning, the goal is to learn an embedding so that data points of the same class are close to each other and data points of different classes are far apart. We propose a distance-ratio-based (DR) formulation for metric learning. Like the softmax-based formulation for metric learning, it models $p(y=c|x')$, the probability that a query point $x'$ belongs to a class $c$. The DR formulation has two useful properties. First, the corresponding loss is not affected by scale changes of an embedding. Second, it outputs the optimal (maximum or minimum) classification confidence scores on the representing points of classes. To demonstrate the effectiveness of our formulation, we conduct few-shot classification experiments using the softmax-based and DR formulations on the CUB and mini-ImageNet datasets. The results show that the DR formulation generally enables faster and more stable metric learning than the softmax-based formulation. As a result, using the DR formulation achieves improved or comparable generalization performance.
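The scale-invariance property claimed above can be illustrated with a minimal sketch. Here we assume the DR formulation assigns class probabilities proportional to inverse powers of the distances between a query embedding and per-class representing points; the function name `dr_probabilities` and the exponent `power` are illustrative assumptions, not the paper's exact notation.

```python
import numpy as np

def dr_probabilities(query, prototypes, power=2.0, eps=1e-12):
    """Distance-ratio-style class probabilities (illustrative sketch).

    query:      (d,) embedding of the query point x'
    prototypes: (C, d) representing points, one per class
    Returns a (C,) vector of probabilities p(y=c | x').
    """
    # Euclidean distance from the query to each class representative.
    d = np.linalg.norm(prototypes - query, axis=1)
    # Inverse-power weights; eps guards against division by zero
    # when the query coincides with a representing point.
    inv = 1.0 / np.maximum(d, eps) ** power
    # Normalizing the ratios yields the class probabilities.
    return inv / inv.sum()

prototypes = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 0.0]])
query = np.array([1.0, 1.0])

# Scale invariance: rescaling the whole embedding space multiplies
# every distance by the same factor, so the ratios are unchanged.
p_original = dr_probabilities(query, prototypes)
p_scaled = dr_probabilities(5.0 * query, 5.0 * prototypes)
print(np.allclose(p_original, p_scaled))  # scale change leaves probabilities intact
```

At a representing point itself the distance to its own class vanishes, so the corresponding probability saturates at the maximal confidence, matching the second property stated in the abstract.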