As robotic systems move from highly structured environments to open-world settings, incorporating uncertainty from dynamics learning or state estimation into the control pipeline is essential for robust performance. In this paper we present a nonlinear particle model predictive control (PMPC) approach to control under uncertainty, which directly incorporates any particle-based uncertainty representation, such as those common in robotics. Our approach builds on scenario methods for MPC, but in contrast to existing approaches, which constrain either all timesteps or only the first timestep to share actions across scenarios, we investigate the impact of a \textit{partial consensus horizon}. By implementing this optimization for nonlinear dynamics via sequential convex optimization, our approach yields an efficient framework that can be tuned to the particular information-gain dynamics of a system, mitigating both over-conservatism and over-optimism. We evaluate our approach on two robotic systems across three problem settings: time-varying, partially observed dynamics; sensing uncertainty; and model-based reinforcement learning, and show that it improves performance over baselines in all settings.