We study the metric and preference learning problem in Hilbert spaces and obtain a novel representer theorem for the simultaneous task of metric and preference learning. Our key observation is that the representer theorem can be formulated with respect to the norm induced by the inner product inherent in the problem structure. We further show how our framework applies to metric learning from triplet comparisons, yielding a simple and self-contained representer theorem for this task. In the case of Reproducing Kernel Hilbert Spaces (RKHSs), we demonstrate that the solution to the learning problem can be expressed using kernel terms, akin to classical representer theorems.
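To make the triplet setting concrete, the sketch below is a minimal illustration (not the paper's method) of representer-style metric learning from triplet comparisons: the learned metric acts on vectors of kernel evaluations at the training points, so the solution is parameterized entirely by kernel terms, as a representer theorem would guarantee. All function names, the RBF kernel choice, and the hinge loss are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gram matrix K with K[a, b] = exp(-gamma * ||x_a - x_b||^2).
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def triplet_metric_learn(X, triplets, gamma=1.0, lr=0.01, epochs=200):
    """Illustrative sketch: learn a factor L of a metric on kernel
    feature vectors from triplets (i, j, k), each meaning "x_i should
    be closer to x_j than to x_k" under the learned metric."""
    n = X.shape[0]
    K = rbf_kernel(X, gamma)  # column K[:, i] = (k(x_1, x_i), ..., k(x_n, x_i))
    rng = np.random.default_rng(0)
    # Metric is d^2(i, j) = ||L (K[:, i] - K[:, j])||^2, i.e. expressed
    # purely through kernel evaluations at the training points.
    L = 0.01 * rng.standard_normal((n, n))
    for _ in range(epochs):
        grad = np.zeros_like(L)
        for i, j, k in triplets:
            u = K[:, i] - K[:, j]  # difference of kernel sections (close pair)
            v = K[:, i] - K[:, k]  # difference of kernel sections (far pair)
            margin = 1.0 + u @ L.T @ L @ u - v @ L.T @ L @ v
            if margin > 0:  # hinge active: push the triplet toward satisfaction
                grad += 2.0 * L @ (np.outer(u, u) - np.outer(v, v))
        L -= lr * grad / max(len(triplets), 1)
    return L, K

def dist2(L, K, i, j):
    # Squared learned distance between training points i and j.
    u = K[:, i] - K[:, j]
    return float(u @ L.T @ L @ u)
```

A usage pattern would be to build triplets from class labels (same-class pairs should be closer than cross-class pairs) and check that most triplets are satisfied after training; the key structural point is that `L` never touches raw inputs, only the kernel columns `K[:, i]`.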