Metric learning aims to measure how similar samples are to one another, which is one of the core problems in pattern recognition. The performance of a large number of machine learning methods, such as the k-nearest-neighbour, support-vector-machine, and radial-basis-function-network classifiers, k-means clustering, and various graph-based methods, is largely determined by the choice of similarity measure between samples. The usual goal of metric learning is to make the distances between samples of the same class as small as possible and the distances between samples of different classes as large as possible.
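This goal of shrinking intra-class distances while enlarging inter-class distances is exactly what losses such as the triplet loss encode. The sketch below is a minimal illustration in PyTorch; it is not taken from the text, and the function name and margin value are illustrative.

```python
# Minimal illustration of the metric-learning objective: pull same-class pairs
# together, push different-class pairs apart by at least a margin.
# Assumes anchor/positive/negative are batches of embedding vectors.
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    d_pos = F.pairwise_distance(anchor, positive)   # same-class distance: shrink
    d_neg = F.pairwise_distance(anchor, negative)   # different-class distance: enlarge
    return F.relu(d_pos - d_neg + margin).mean()
```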


Title: Continual Learning of Object Instances

Abstract: We propose continual instance learning, an approach that applies the concept of continual learning to the task of distinguishing instances of the same object category. We focus specifically on car objects and incrementally learn, via metric learning, to distinguish car instances from one another. We begin the paper by evaluating current techniques. Catastrophic forgetting is evident in existing methods, and we propose two remedies. First, we regularise metric learning with a normalised cross-entropy loss. Second, we augment existing models with synthetic data transfer. Extensive experiments on three large datasets, using two different architectures and five different continual learning methods, show that normalised cross-entropy and synthetic transfer reduce forgetting in existing techniques.
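The abstract does not spell out the exact form of the normalised cross-entropy regulariser. A common formulation applies cross-entropy to scaled cosine similarities between L2-normalised embeddings and L2-normalised class (here, instance) weights; the sketch below assumes that formulation, and the scale value and tensor shapes are assumptions rather than the paper's specification.

```python
# Hedged sketch of one common normalised cross-entropy form: cross-entropy over
# scaled cosine similarities between L2-normalised embeddings and per-instance
# weights. `scale` and the weight layout are assumptions, not the paper's recipe.
import torch
import torch.nn.functional as F

def normalised_cross_entropy(embeddings, weights, labels, scale=16.0):
    emb = F.normalize(embeddings, dim=1)   # (batch, dim), unit norm
    w = F.normalize(weights, dim=1)        # (num_instances, dim), unit norm
    logits = scale * emb @ w.t()           # cosine-similarity logits
    return F.cross_entropy(logits, labels)
```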


Latest Content

For many data mining and machine learning tasks, the quality of a similarity measure is the key to their performance. To automatically find a good similarity measure from datasets, metric learning and similarity learning have been proposed and studied extensively. Metric learning learns a Mahalanobis distance based on a positive semi-definite (PSD) matrix to measure the distances between objects, while similarity learning aims to directly learn a similarity function without the PSD constraint, which makes it more attractive. Most existing similarity learning algorithms are online similarity learning methods, since online learning is more scalable than offline learning. However, most existing online similarity learning algorithms learn a full matrix with d^2 parameters, where d is the dimension of the instances. This is clearly inefficient for high-dimensional tasks due to its high memory and computational complexity. To solve this issue, we introduce several Sparse Online Relative Similarity (SORS) learning algorithms, which learn a sparse model during the learning process, so that the memory and computational cost can be significantly reduced. We theoretically analyze the proposed algorithms and evaluate them on some real-world high-dimensional datasets. Encouraging empirical results demonstrate the advantages of our approach in terms of efficiency and efficacy.
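To make the distinction in this abstract concrete, the sketch below contrasts a Mahalanobis distance parameterised by a PSD matrix with an unconstrained bilinear similarity carrying d^2 parameters. It does not reproduce the SORS update itself, only the objects it operates on; the names and dimensions are illustrative.

```python
# Contrast from the abstract: a Mahalanobis distance requires a PSD matrix M,
# while a bilinear similarity uses an unconstrained matrix W with d^2 parameters.
# Illustration only; the SORS update rule is not shown.
import numpy as np

def mahalanobis_distance_sq(x, y, M):
    """Squared Mahalanobis distance; M is assumed positive semi-definite."""
    diff = x - y
    return float(diff @ M @ diff)

def bilinear_similarity(x, y, W):
    """Bilinear similarity s_W(x, y) = x^T W y; no PSD constraint on W."""
    return float(x @ W @ y)

d = 5
rng = np.random.default_rng(0)
x, y = rng.standard_normal(d), rng.standard_normal(d)
A = rng.standard_normal((d, d))
M = A @ A.T                        # PSD by construction
W = rng.standard_normal((d, d))    # unconstrained, d^2 parameters
print(mahalanobis_distance_sq(x, y, M), bilinear_similarity(x, y, W))
```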

