We propose the first Bayesian encoder for metric learning. Rather than relying on neural amortization as done in prior works, we learn a distribution over the network weights with the Laplace Approximation. We actualize this by first proving that the contrastive loss is a valid log-posterior. We then propose three methods that ensure a positive definite Hessian. Lastly, we present a novel decomposition of the Generalized Gauss-Newton approximation. Empirically, we show that our Laplacian Metric Learner (LAM) estimates well-calibrated uncertainties, reliably detects out-of-distribution examples, and yields state-of-the-art predictive performance.
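To make the recipe concrete, the sketch below shows the general pattern the abstract describes: train a MAP encoder with a contrastive loss, approximate the posterior over the weights with a Laplace approximation, and sample weights to obtain embedding uncertainties. This is a minimal illustration, not the paper's implementation: the diagonal squared-gradient (empirical Fisher) proxy merely stands in for the paper's positive-definite Hessian methods and GGN decomposition, and all names here (`ContrastiveEncoder`, `diagonal_laplace`, `sample_embeddings`) are hypothetical.

```python
import torch
import torch.nn as nn

class ContrastiveEncoder(nn.Module):
    """Toy deterministic encoder whose weights receive a Laplace posterior."""
    def __init__(self, in_dim=32, emb_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, emb_dim)
        )

    def forward(self, x):
        return self.net(x)

def contrastive_loss(z1, z2, same, margin=1.0):
    # Pairwise contrastive loss: pull positive pairs together,
    # push negative pairs at least `margin` apart.
    d = ((z1 - z2).pow(2).sum(-1) + 1e-9).sqrt()
    pos = same * d.pow(2)
    neg = (1.0 - same) * torch.clamp(margin - d, min=0.0).pow(2)
    return 0.5 * (pos + neg).mean()

def diagonal_laplace(model, pairs, prior_prec=1.0):
    # Diagonal empirical Fisher (squared gradients) as a cheap,
    # positive-definite stand-in for the Hessian of the loss.
    # Posterior precision = prior precision + summed squared gradients.
    prec = [torch.full_like(p, prior_prec) for p in model.parameters()]
    for x1, x2, same in pairs:
        model.zero_grad()
        contrastive_loss(model(x1), model(x2), same).backward()
        for h, p in zip(prec, model.parameters()):
            h.add_(p.grad.pow(2))
    return prec

@torch.no_grad()
def sample_embeddings(model, prec, x, n_samples=10):
    # Draw weight samples from N(theta_MAP, diag(prec)^-1), embed x under
    # each sample, and return all embeddings; their spread is the uncertainty.
    mean = [p.clone() for p in model.parameters()]
    outs = []
    for _ in range(n_samples):
        for p, m, h in zip(model.parameters(), mean, prec):
            p.copy_(m + torch.randn_like(m) / h.sqrt())
        outs.append(model(x))
    for p, m in zip(model.parameters(), mean):
        p.copy_(m)  # restore the MAP weights
    return torch.stack(outs)  # (n_samples, batch, emb_dim)
```

Under this sketch, a wider spread of `sample_embeddings` outputs for a query signals higher epistemic uncertainty, which is the property the abstract leverages for calibration and out-of-distribution detection.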