Well-calibrated probabilistic regression models are a crucial learning component in robotics applications as datasets grow rapidly and tasks become more complex. Classical regression models are usually either probabilistic kernel machines with a flexible structure that does not scale gracefully with data or deterministic and vastly scalable automata, albeit with a restrictive parametric form and poor regularization. In this paper, we consider a probabilistic hierarchical modeling paradigm that combines the benefits of both worlds to deliver computationally efficient representations with inherent complexity regularization. The presented approaches are probabilistic interpretations of local regression techniques that approximate nonlinear functions through a set of local linear or polynomial units. Importantly, we rely on principles from Bayesian nonparametrics to formulate flexible models that adapt their complexity to the data and can potentially encompass an infinite number of components. We derive two efficient variational inference techniques to learn these representations and highlight the advantages of hierarchical infinite local regression models, such as dealing with non-smooth functions, mitigating catastrophic forgetting, and enabling parameter sharing and fast predictions. Finally, we validate this approach on a set of large inverse dynamics datasets and test the learned models in real-world control scenarios.
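The core building block the abstract describes — approximating a nonlinear function through a set of local linear units — can be sketched with plain locally weighted linear regression. This is a minimal illustrative example, not the paper's hierarchical Bayesian model: the function names, the Gaussian kernel weighting, and the bandwidth value are all assumptions chosen for the sketch.

```python
import numpy as np

def local_linear_predict(x_query, X, y, bandwidth=0.3):
    """Predict at x_query by fitting a weighted linear model around it.

    Each training point is weighted by a Gaussian kernel centered at the
    query, so the global nonlinear function is approximated by a local
    linear unit -- the basic idea behind local regression techniques.
    (Illustrative sketch; not the paper's variational inference scheme.)
    """
    w = np.exp(-0.5 * ((X - x_query) / bandwidth) ** 2)  # local weights
    A = np.stack([np.ones_like(X), X], axis=1)           # design matrix [1, x]
    W = np.diag(w)
    # Weighted least squares: beta = (A^T W A)^{-1} A^T W y
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return beta[0] + beta[1] * x_query

# Toy data from a smooth nonlinear function
rng = np.random.default_rng(0)
X = np.linspace(-3.0, 3.0, 200)
y = np.sin(X) + 0.05 * rng.standard_normal(200)

pred = local_linear_predict(1.0, X, y)  # close to sin(1.0) ~ 0.84
```

The paper's models replace the fixed kernel weighting above with probabilistic gating over a potentially infinite set of such local units, with the number of active units inferred from the data.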