We present a framework for performing regression between two Hilbert spaces. We accomplish this via Kirszbraun's extension theorem -- apparently the first application of this technique to supervised learning -- and analyze its statistical and computational aspects. We begin by formulating the correspondence problem as a quadratically constrained quadratic program (QCQP) regression. Then we describe a procedure for smoothing the training data, which amounts to regularizing the hypothesis complexity via its Lipschitz constant. The Lipschitz constant is tuned via a Structural Risk Minimization (SRM) procedure, based on the covering-number risk bounds we derive. We apply our technique to learning a transformation between two robotic manipulators with different embodiments, and report promising results.
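To make the Lipschitz-extension idea concrete, the sketch below implements the classical *scalar-valued* analogue of such an extension (the McShane formula f(x) = min_i (y_i + L · ||x − x_i||)), not the full Kirszbraun construction for Hilbert-space-valued maps that the paper solves via QCQP. All function and variable names here are illustrative assumptions; the point is only that, given training pairs and a Lipschitz constant L, one obtains an everywhere-defined L-Lipschitz predictor that agrees with L-consistent training labels.

```python
import numpy as np

def mcshane_extension(X, y, L):
    """Scalar McShane extension: f(x) = min_i (y_i + L * ||x - x_i||).

    Illustrative stand-in for a Lipschitz extension; the paper's method
    instead extends Hilbert-space-valued maps via Kirszbraun's theorem
    and a QCQP. The returned f is L-Lipschitz on all of R^d and matches
    the training labels whenever they are L-Lipschitz consistent.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)

    def f(x):
        # Distance from the query point to every training input.
        d = np.linalg.norm(X - np.asarray(x, dtype=float), axis=1)
        # Tightest L-Lipschitz upper envelope through the labels.
        return float(np.min(y + L * d))

    return f

# Toy 1-D example with labels drawn from a 1-Lipschitz function.
X_train = np.array([[0.0], [1.0], [2.0]])
y_train = np.array([0.0, 1.0, 0.5])
f = mcshane_extension(X_train, y_train, L=1.0)
```

Because the labels above are 1-Lipschitz consistent, `f` interpolates them exactly (e.g. `f([1.0])` returns `1.0`), and between training points it returns the smallest value compatible with the Lipschitz bound.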