We propose an efficient algorithm for learning mappings between two metric spaces, $\X$ and $\Y$. Our procedure is strongly Bayes-consistent whenever $\X$ and $\Y$ are topologically separable and $\Y$ is "bounded in expectation" (our term; the separability assumption can be somewhat weakened). At this level of generality, ours is the first such learnability result for unbounded loss in the agnostic setting. Our technique is based on metric medoids (a variant of Fr\'echet means) and presents a significant departure from existing methods, which, as we demonstrate, fail to achieve Bayes-consistency on general instance- and label-space metrics. Our proofs introduce the technique of {\em semi-stable compression}, which may be of independent interest.
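For orientation, a minimal sketch of the central object under the standard medoid formulation (assumed notation; the paper's precise definition may differ): given a finite sample $y_1, \dots, y_n \in \Y$ with metric $\rho_\Y$, a metric medoid is any sample point minimizing the average distance to the rest,
\[
\hat{y} \;\in\; \arg\min_{y \in \{y_1,\dots,y_n\}} \; \frac{1}{n} \sum_{i=1}^{n} \rho_\Y(y, y_i),
\]
whereas a Fr\'echet mean minimizes the analogous objective over all of $\Y$ (typically with squared distances). Restricting the minimization to the sample guarantees that a minimizer exists in an arbitrary metric space, where a Fr\'echet mean need not.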