Active metric learning is the problem of incrementally selecting high-utility batches of training data (typically, ordered triplets) to annotate, in order to progressively improve a learned model of a metric over some input domain as rapidly as possible. Standard approaches, which independently assess the informativeness of each triplet in a batch, are susceptible to highly correlated batches with many redundant triplets and hence low overall utility. While recent work \cite{kumari2020batch} proposes batch-decorrelation strategies for metric learning, it relies on ad hoc heuristics that estimate the correlation between only two triplets at a time. We present a novel batch active metric learning method that leverages the Maximum Entropy Principle to learn the least biased estimate of the triplet distribution for a given set of prior constraints. To avoid redundancy between triplets, our method collectively selects batches with maximum joint entropy, which simultaneously captures both informativeness and diversity. We take advantage of the submodularity of the joint entropy function to construct a tractable solution using an efficient greedy algorithm based on Gram-Schmidt orthogonalization that is provably $\left( 1 - \frac{1}{e} \right)$-optimal. Our approach is the first batch active metric learning method to define a unified score that balances informativeness and diversity for an entire batch of triplets. Experiments with several real-world datasets demonstrate that our algorithm is robust, generalizes well to different applications and input modalities, and consistently outperforms the state-of-the-art.
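To make the greedy batch-selection step concrete, the sketch below illustrates one standard way to maximize joint entropy greedily under a Gaussian surrogate, where the joint entropy of a batch reduces to the log-determinant of its Gram matrix and the marginal gain of a candidate is the log of its residual norm after Gram-Schmidt orthogonalization against the triplets already selected. This is an illustrative sketch only: the function name, the candidate embedding matrix, and the Gaussian/log-determinant surrogate are assumptions for exposition and are not taken from the paper's actual formulation.

\begin{verbatim}
import numpy as np

def greedy_max_entropy_batch(candidates, k, eps=1e-12):
    """Greedily select k candidate triplets (one embedding row each) so that
    the joint entropy of the batch, modeled via a Gaussian surrogate as the
    log-determinant of the batch Gram matrix, is approximately maximized.

    Because the objective is monotone submodular, the greedy loop enjoys the
    usual (1 - 1/e) approximation guarantee.  The marginal entropy gain of a
    candidate equals the log of its residual norm after projecting out the
    already selected directions, maintained here with incremental
    Gram-Schmidt orthogonalization.  (Hypothetical illustration, not the
    paper's exact algorithm.)
    """
    X = np.asarray(candidates, dtype=float)   # (n_candidates, d) embeddings
    n = X.shape[0]
    selected = []                             # indices of chosen triplets
    residual = X.copy()                       # candidates minus selected span

    for _ in range(min(k, n)):
        norms = np.linalg.norm(residual, axis=1)
        if selected:
            norms[selected] = -np.inf         # never re-pick a triplet
        best = int(np.argmax(norms))
        if norms[best] <= eps:                # remaining candidates redundant
            break
        selected.append(best)

        # Gram-Schmidt step: normalize the new direction and remove its
        # component from every remaining candidate.
        q = residual[best] / norms[best]
        residual = residual - np.outer(residual @ q, q)

    return selected

# Example usage: 100 candidate triplet embeddings in R^16, batch size 8.
rng = np.random.default_rng(0)
batch = greedy_max_entropy_batch(rng.normal(size=(100, 16)), k=8)
\end{verbatim}

In this surrogate, a candidate that is nearly a linear combination of already selected triplets has a small residual norm and hence a small entropy gain, which is how the single score discourages redundant, highly correlated batches while still favoring informative directions.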